6D.ai CEO Matt Miesnieks has been in the AR game since the beginning, and he says the best in the industry have always known the best, native use cases for the technology; the problem was waiting for the technology to catch up to those use cases. Listen as Matt and Alan discuss how the tech and the vision are starting to line up today, with the help of groundbreaking cloud mapping technology.

Alan: Today’s guest is Matt Miesnieks, the CEO at 6D.ai. Matt is renowned as one of the world’s AR industry leaders through his influential blog posts and his presence around the world. He’s the co-founder and CEO of 6D.ai, the leading AR cloud platform, which is his third AR startup. He also helped form Super Ventures, a platform and VC firm investing in AR solutions. He’s built AR system prototypes for Samsung, and had long executive and technical careers in mobile software infrastructure before jumping into AR back in 2009. In his career, he’s been the director of product development at Samsung for VR and AR research and development, and co-founder and CEO of Dekko, which created the first mobile holographic and mixed reality platform for iOS, with 3D computer vision and SLAM tracking. At Layar, he was the worldwide head of customer development; Layar sold to Blippar back in the day. So I want to welcome Matt to the show. Matt, thank you so much for joining me.

Matt: Thanks for having me.

Alan: Thanks Matt. This is really an honor to meet you, and you have so much experience in this industry that we can all learn from. You’ve been doing this, it seems like, since the beginning. So why don’t we start with what you’ve seen as the progression of augmented reality over the last decade that you’ve been involved?

Matt: Using words like “decade” brings it home. I mean, I got into AR from working for the company called Openwave that invented the mobile phone browser, and seeing that phones were being connected to the Internet, and starting to think about what was next. And I realized that interfaces were getting more natural, and we were going to end up connecting our senses to the Internet. If you can connect the sense of sight — our dominant sense — that was going to be a really big deal. And I learned that was called augmented reality; that ability to sort of blend digital information and the real world into what you see. I kind of jumped in expecting it to be happening pretty soon. And at the time, there was nobody. At the first AWE conference back then, I think there were 300 people in total. And that was the entire professional AR industry. That included a bunch of researchers, a bunch of, like, science fiction authors, just a bunch of weirdos and a handful of people with some sort of commercial expertise. I think the interesting thing is that, even back then, the use cases and the kinds of interactions, the ideas about what AR was going to be good for in the earliest days, are still the same ones. It’s not like anything’s changed; they’ve stayed the same. What’s gotten better is the user experience around those use cases. The technology has improved. There’s like 100x or 50x more processing capacity in our hands. The algorithms have gotten better. And the same use cases that we knew were good ideas back then are now like, oh, these are starting to work now. Enterprises and consumers are starting to get some value out of it.

Alan: So let me interject quickly, for the people listening who are not familiar with this industry: what are these use cases? What are the use cases that have stood the test of time? I know one of them that keeps coming up on almost every podcast that we do is remote assistance. The ability to have other people see what you’re seeing and collaborate with you. So, what are they…?

Matt: Well, probably even to go up a level from that: when I was in mobile and helping to sort of shepherd in that transition from web to mobile Internet, all the first mobile experiences would just take a website and sort of squish it down to a small screen, so you had, like, eBay on mobile. It was pretty lame. It took a little while to figure out what are the native capabilities of a phone, and what are the use cases that leverage those native capabilities? And it turned out to be things like: your phone is with you, and you’ve got a GPS. So things like Uber or Google Maps and directions were native to the phone, and were kind of useless on the PC. We saw that your phone was with you, so you could always send real-time, short messages. So things like Twitter really took off. Not to mention, the camera was there, and Instagram and Snapchat and camera-driven experiences are all native to mobile. And when you think about AR, the best use cases are native to AR.

So we’re finding that, when you think about what AR really is, it really is that ability to have digital content placed in real time, in context, in the world. What are the scenarios that enable use cases where that’s a native experience? And remote fieldwork is exactly one of those; you want to have someone — basically, an expert — standing at your shoulder, pointing at something or touching things, going, “now press that button; and now, draw a line here; and now, turn this knob,” and AR lets you do that. The expert can be remote, but they can graphically annotate the real world scene and give you that type of engagement. That means that companies can drastically lower their costs by having all the experts in a centralized location; they can put lower-cost or less-trained employees out in the field, and those employees can still do as good a job as if the senior person was there. So that’s one. Another early one that was really obvious is pre-visualization of purchases. Particularly expensive or complicated purchases.

So, you know, the stereotypical Ikea “preview my couch in the living room.” That concept is native to AR. And although people only buy a couch every few years, so it’s not a great, repeatable, highly-engaging use case, the idea of just being able to say, “I’m thinking of buying this physical thing, but I just want to know what it’s going to look like and how it’s going to work in my world” is very compelling, when you get the user experience right. Companies like Sephora are letting you try on makeup. And now, with the latest neural networks and graphics programming, you don’t look like Ronald McDonald anymore. You look very natural, and it looks great; it looks like the product would look as if it were applied by a professional makeup artist.

Alan: It’s interesting; I was on a panel with Miriam from Modiface, which was acquired by L’Oreal for their makeup try-on. Now they’re venturing out: they’re doing hair try-ons, and jewelry as well. And like you said, it had to look realistic, and it took them a long time to figure that out. When you smile, your lips need to stay red. If you move, the lip gloss can’t end up on your cheeks.

Matt: Definitely, most of the problems to be solved — and still being solved — are the technical problems right now; we’ve known these use cases for 10 years. Clothing is another one; I want to try on this outfit, you know, virtually. And that’s one where the use cases are a no-brainer, but the technology isn’t quite good enough yet to deliver it.

Alan: We’re not quite there.

Matt: The way the clothes fall, and the natural movement of the cloth, and getting the sizing and everything right.

Alan: It’s coming soon, though. I have seen some people working on it; it’s getting there.

Matt: They’ve been working on it for 10 years. Definitely, the advances with neural networks and AI on the graphics side are just making things possible that were never possible before.

Alan: It’s interesting; I wrote a whole article called “The First Killer App for Augmented Reality,” and it was all about virtual try-ons. Whether it’s makeup, shoes, watches, glasses, clothing. Clothing was the only one on the list that really wasn’t being done all that well.

Matt: But it also works for enterprise workflows. You know, like construction and engineering companies that want to previsualize what it’s going to look like when we finish building this room and install all the equipment and the pipes and the air conditioning — and is there going to be enough room for some other bit of equipment to be installed? It’s exactly the same use case as the IKEA couch example, but it’s a different industry, and the ROI is much, much faster.

Alan: We just invested in a company — and they’re going to be in stealth for a bit — but they’re solving the problem of taking BIM models or CAD models into AR, overlaying them in the context of the real world, but also looking for errors. Because with products like 6D.ai, you’re now able to — and we’ll unpack this in a second — create a point cloud map of the real world, overlay the data, and make annotations on that in real time. So for example, on a construction site, if you have an HVAC system that’s off by six inches, sometimes they don’t notice that for a month, and by the time they realize it, it’s too late and they’ve got to rip the whole thing down and start over again. And rework in the world of construction is about a 60 billion dollar problem.

Matt: Yeah, that’s a big one. At Dekko, our computer vision lead literally did his PhD on using AR to support built-to-plan verification for the construction industry. It’s a fantastic use case, and it’s mostly been limited by the quality of the technology. Even the latest SDKs, like ARKit and ARCore, kind of struggle when you get to a space that’s bigger than, like, a small apartment or a big room. If you want to do a construction site, you need better technology than that just to enable the use case.

Alan: Okay, so you guys have started developing a new foundational framework for capturing point cloud maps — and for those people who don’t know what that is, or maybe don’t understand it — maybe you can unpack what you guys are doing at 6D.ai right now, and why it’s important to businesses.

Matt: Yeah. Well, we’re a bunch of computer vision experts that spun out of one of the top AR computer vision labs in the world at Oxford University, the Active Vision Lab. That’s kind of what we do, but it’s not why we’re doing it. The reason the company was started was because, like I said, all these use cases were so obvious — especially after being in the space for a long time — that I saw all this amazing commercial potential going unrealized because the technology wasn’t good enough. So we chose to focus on solving some of these hardest technology problems, and making the solution available to developers. One of the biggest problems is: how do you make content feel like it’s really part of the world? And the only way you can do that is if that virtual content can interact physically with the world. It means if something goes around a corner, it should disappear as it goes around the corner. Things should bounce off solid objects. The content should really understand that world. And it meant being able to capture a model — a virtual model of the world that perfectly mirrors reality — so the virtual content can interact with the virtual model, and it matches the real world.
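The occlusion behavior Matt describes, where virtual content hides behind real objects, boils down to a per-pixel depth test against the reconstructed model. Here is a minimal sketch of that idea — not 6D.ai’s actual pipeline; the function name and array layout are assumptions for illustration:

```python
import numpy as np

def composite_with_occlusion(camera_rgb, scene_depth, virtual_rgb, virtual_depth):
    """Blend a rendered virtual layer over the camera image.

    camera_rgb / virtual_rgb: (H, W, 3) color images.
    scene_depth: (H, W) per-pixel distance to the reconstructed
        real-world surface (from the captured 3D model).
    virtual_depth: (H, W) per-pixel distance to the virtual content
        (use infinity where no virtual content is rendered).
    """
    # A virtual pixel is visible only where it is nearer to the camera
    # than the real surface; otherwise the real world occludes it.
    visible = virtual_depth < scene_depth
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out
```

In a real engine this happens on the GPU via the depth buffer: the reconstructed mesh is rendered invisibly so it still writes depth, which is what makes a virtual rabbit disappear behind a real couch.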

So that was a problem that, in the past, could only be solved with expensive depth cameras, or Google Street View-style cars, or by taking a thousand photos and waiting a day for it all to be processed. And my co-founder at his lab in Oxford really just invented a way to do that in real time on a regular phone, all in software. No special hardware needed; just wave your phone around, and it builds that 3D model. And that was kind of the germ; the start of the company. And since then, we’ve built it up to be able to support very large spaces — like, city-scale sort of areas. And we’re adding more and more real-time neural networks, so that we can start to identify and track things that move in the scene. If a person or a dog walks in front of the content, it’s occluded and bumps into things properly.

Alan: If people are listening and they want to try this out, I got to try this program that was built on your backend, called Babble Rabbit by Patched Reality. Basically, you scan the area that you’re in, and this rabbit jumps around. And exactly as you said, it can hide behind chairs and couches, and jump off your counter onto the floor. It’s really incredible to watch AR when it’s delivered in perfect synergy with the world around you. It really does make a difference. It’s mind-blowing, because if you look at some of the AR out there — Pokémon Go, for example — the Pokémon look like they’re in front of you, but they’re not moving around anything; they’re not really in context with the world around them, so they have this kind of fake look about them. But what you’re talking about is global-scale capture of cloud maps, so that AR interacts seamlessly around you. That’s incredible.

Matt: Yeah, I mean, technically, it’s a really big deal. And unfortunately — I mean, I don’t know if “unfortunate” is the word — but it’s one of those problems where, the better we do our job, the less people notice what we’ve done. Because people go, “of course, that’s the way it’s supposed to work. Of course, my rabbit should hide behind the couch.” We’re constantly working just to be invisible, and to let the developer’s content — the experience they create — be what gets the wow effect. Until you get things working properly — the shadows and the lighting pointing in a direction consistent with the real world, the physics and the structure and all that — you want to eliminate everything that’s going to break that sense of illusion. And if you can maintain that sense of illusion and make it really compelling, people just get incredibly absorbed, because it becomes magic.

Alan: One of the things that one of our developers was working on was taking the geolocation — so, the weather API; just pulling the weather API from the cloud — and knowing, OK, how cloudy is it? Based on the weather API, how dark should the shadow be? And depending on which direction your phone is pointing, which direction should the shadow point, based on your geolocation. How cool is that?
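The mash-up Alan describes can be sketched in a few lines. This is a hypothetical illustration, assuming the weather API returns a cloud-cover fraction and that the sun’s compass azimuth has already been computed from the geolocation and time of day; the function and parameter names are made up:

```python
def shadow_params(cloud_cover, sun_azimuth_deg, device_heading_deg,
                  max_opacity=0.6):
    """Derive AR shadow parameters from weather and orientation data.

    cloud_cover: 0.0 (clear sky) to 1.0 (fully overcast), from a weather API.
    sun_azimuth_deg: compass bearing of the sun (0 = north, 90 = east),
        computed from latitude, longitude, and time of day.
    device_heading_deg: compass heading the phone camera is facing.
    Returns (shadow opacity, shadow bearing relative to the camera).
    """
    # Overcast skies diffuse sunlight, so shadows get fainter.
    opacity = max_opacity * (1.0 - cloud_cover)
    # Shadows fall directly opposite the sun; express that bearing
    # relative to the camera so the renderer can orient them.
    shadow_bearing = (sun_azimuth_deg + 180.0 - device_heading_deg) % 360.0
    return opacity, shadow_bearing
```

For example, on a clear day with the sun due east and the camera facing north, this yields full-strength shadows cast toward the west, at a relative bearing of 270 degrees.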

Matt: We’re barely scratching the surface of what happens when you start mashing up different APIs. The public transport timetables — can you have a giant bat signal on top of your bus as it heads down the street towards you, so you can look for it? Who knows? I struggled with…

Alan: All I really want is, at a big festival, to be able to find my friends. Is that too hard?

Matt: Well hopefully, we’ll have that up-and-running before the end of the year.

Alan: The guys at Coachella, Sam Schoonover, he’s gonna be on the show as well. I don’t know if you saw that Coachella did a massive thing this year, where one of the tents was completely AR-enabled. So it used proximity, like geolocation, but also image recognition to be able to place these digital objects in your world on one of the stages. And then they also created an AR scavenger hunt, where if you went around and found different things, you could collect coins to buy t-shirts and stuff like that. Coachella is really pushing the limits with this technology, as well. It’s pretty cool to watch a consumer brand–

Matt: I was down there last weekend talking to Sam about exactly this.

Alan: Oh, awesome!

Matt: And showing him how it could be much, much better than anything he could imagine. He got quite excited.

Alan: I bet; he’s so enthusiastic about the stuff.

Matt: Yeah, we’re excited about the potential there. I was down there because we had… I mean, we’re all computer vision engineers; our customers are all big companies solving the hardest challenges in AR. But one of the things we do is work with a lot of really high-profile artists to push the limits of what the tech can do. At Coachella, we worked with Aphex Twin. They took some of our neural network computer vision technology and used it in their live show. They’d point one of their cameras at the crowd, run that camera feed through our software, and then project it back up onto the big screen live. You’d have all these psychedelic effects on individuals in the crowd, and it was just really, really amazing.

Alan: All I can say is, that’s sick!

Matt: It was that thing that went so far beyond what we could imagine. It was so much better. It’s like, wow, you sort of realize that what you’ve built, you can do more with it than even you’d imagined.

Alan: It’s incredible. My last company… I don’t know if you’ve ever seen the big see-through touchscreen DJ controller emulator, but we made basically a touchscreen MIDI controller before there were touchscreens, in 2010. And it was see-through, so the audience could see what the artists were doing as well. And we had the opportunity to work with Infected Mushroom, and Morgan Page, and Linkin Park. And seeing what these guys did… the guys at Linkin Park — Mike Shinoda did something really amazing. He took our MIDI controller and made a keyboard out of it, but it looked nothing like a keyboard at all. It was just this kind of series of buttons everywhere. And he played it as if it was a keyboard customized for him. It’s like, we had never even considered that.

Matt: Yeah, Weirdcore, who does all of Aphex’s visuals, he did the same thing. He drives all the displays off of a sample controller. So one of those little square pads, with all the different colored light-up buttons. You normally use it to fire off samples, but he drives all the video through that and plays it like an instrument to get all the screen effects, and dropping different visual patches on top. Yeah. It’s impressive.

Alan: Yeah, it’s pretty cool. You know, when you go to festivals like Coachella or EDC, you realize pretty quickly that those guys are pushing the absolute limits of this technology. I mean, you go to some show, and it’s got a thousand different laser beams coming out at you, and the video is all synchronized to the lasers, which are synchronized to the lights, which are synchronized to the music; and you’re just like, how is this even possible? Then you’ve got 3D projection mapping into the crowd. I think the electronic music scene has really taken to technology more than any other.

Matt: Yeah. Yeah, I know. And we’re just excited to learn from it. I mean, it’s definitely not our customers or a target market in a commercial sense, but from product learnings and just expanding the realm of what we thought was possible? It’s just been fantastic to work with these guys.

Alan: So with that, what is one of the best use cases or case studies of virtual, augmented, or mixed reality that you’ve seen to date? That kind of made you go, wow?

Matt: Besides our own? It’s…

Alan: Your own as well, or whatever, yeah.

Matt: I think Snap are doing probably the most interesting work in AR right now. They did some stuff with landmarks recently, where they made Big Ben vomit rainbows and stuff. I think most people really underestimate how good Snap is at AR. They’re obviously the number one AR company in the world in terms of usage.

Alan: By FAR. People don’t realize it, but by far.

Matt: Yeah, by far. Their research team, and the quality of the organization they’ve built to do this stuff, is as good as anybody’s.

Alan: So, would Snap be, then, a potential customer? Or use your 6D.ai platform for creating even more immersive–?

Matt: Actually, I mean… right now, the concern is that if they were our customer, they would just turn the tap on and we’d drown instantly. We’re friends with a whole bunch of people; again, having been around for 10 years, most of the guys that run the Snap AR team are folks I’ve known from their previous companies and my previous companies. So potentially, who knows? It’s a bit early to say. We’d love, obviously, to have a customer that big. But right now, we’re still 15 people, and I don’t think we could honestly support them in a direct customer relationship. But there are lots of ways to potentially partner.

Alan: Amazing. So you mentioned Snap. What are some other, you know…?

Matt: The other one — and this is the sort of thing that isn’t obvious to people — but the other big one that I think is a big deal is the Microsoft Dynamics 365 apps. When they launched the HoloLens 2, they also announced this suite of templates called Dynamics 365. They’re kind of like an app skeleton that Microsoft’s customers could build on… and I can’t remember which types of use cases they took, but they took things like, for example, a field service type use case, and you basically got a skeleton application for that. One of the big challenges in AR right now is that building these apps is still complex. The tools are pretty new; with Unity and the like, you need to invest a fair bit of time and effort to build something. The fact that Microsoft recognized that and put in the work to say, “look, here’s kind of a semi-turnkey template, ready to go… you just need to configure and customize, and you now have an enterprise AR app, ready to go.” That was impressive. And I don’t think it’s being recognized enough.

Alan: Yeah, I think their whole idea with HoloLens 2 was, you know… the learnings from HoloLens 1. They took the feedback from the users and actually did something, which is kind of unique in technology; they actually listened to people, imagine that! But one of the things that I think was a problem for everybody is that they bought these devices, put them on, and went, okay, now what? No out-of-the-box use cases. There was no easy way to make anything. You had some great companies — Finger Food Studios in Vancouver — and look, you had some great companies making great content, but they had to start from scratch every single time.

Matt: Yeah.

Alan: If you’re a mining company, and your AR development company says, “hey, we’re gonna make this thing for you, and it’s going to be a quarter of a million dollars,” that’s great the first time. But when they come back to you and say, “we’re going to make the next thing for you, and it’s going to be a quarter of a million dollars,” that’s like, something’s not right. So Microsoft said, “hey, let’s make it out-of-the-box useful.” I think that was the best thing they did with HoloLens 2, aside from the improvements to the actual hardware itself.

Matt: Totally. Yeah, totally. There are so many similarities between the way the smartphone ecosystem emerged and the way the AR ecosystem is emerging, and all of those peripherals. And you said it: they’re peripheral, but really they’re core to the end-to-end user experience. It’s probably gonna take a bit longer than anyone realizes before everything’s in place, but it’s happening. I’m just sitting here, watching history repeat in many ways.

Alan: Okay, so you’ve seen a lot. You’ve been doing this; you are… I want to say “OG.” But you’ve invested in companies through Super Ventures; I guess, what are some of the companies that you’ve either invested in, or that you see, that are really bringing value now? In enterprise, or in retail, or marketing, or sales? What are the companies that are driving value now, that businesses can look up, research, and go, “hey, that will work for me”? Because really, when it comes down to it, it comes down to those specific use cases. And until we have ubiquitous systems that work across the enterprise, it’s going to be these one-off solutions for now, I think.

Matt: I think that one of the mistakes that every… not everyone made this mistake, but a lot of people fell into this trap of believing that gaming is where new technology gets adopted first, and that gaming and entertainment are the right places to focus for success in VR and AR. I disagreed with that from the beginning. Gaming is where GPUs took off, and 3D graphics took off. But really, PCs took off in the enterprise, and mobile phones took off in the enterprise, and smartphones took off in the enterprise. And to me, AR just seems like it’s more like that. So the enterprise has always been the right place. And then looking at, well, what’s stopping enterprises? Largely it was technology problems, both hardware and software. When I looked at starting my company, 6D, and where to put my energy, it was “solve the hardest technical problems and enable use cases that are going to work for enterprises.”

When I look at the market, the companies that are taking that same sort of strategy — everything from Microsoft down to startups — are the ones that are doing relatively well. If you’re going to go with something that’s consumer-oriented, you’ve got to either hope for a massive consumer hit (generally based on some high-profile IP like Pokémon), or you’ve got to have some huge distribution available to you, like Snap or Facebook or Instagram have. So yeah, I kind of advise startups, “whatever you’re building, you need to be able to sell it for $100,000. And if you can’t tell me the name and phone number of a person today who would pay $100,000 for this, then you’re probably gonna struggle for a few years. You’re going to struggle for longer than you have runway in your bank.”

Alan: That’s some sage advice.

Matt: Well, only because I’ve made all those mistakes myself (laughs). Don’t be stupid like I was.

Alan: That’s the best advice ever, from somebody who’s literally done it. “Don’t do what I did.”

Matt: It was worse; I was right. Like, my Dekko, and what we built at Samsung, were exactly the same technology that ARKit and ARCore and Magic Leap and others built. Like, verbatim, the same technology. I just learned that even with the correct vision, building the correct technology, and getting everything right, you can still fail, just because other aspects of the market that you can’t control aren’t ready. So build something that you can sell today, for enough money, to keep you going.

Alan: You know, it’s interesting. We’ve taken this exact theory, and we’re… you know, I can’t announce anything on the show right now — I’ll tell you afterwards — but one of the things we’re working on is being able to help startups do exactly this. And our theory is exactly what you said. Ours is a different level: once they sell over $250,000 worth of whatever it is they’re selling, that’s when we step in, invest in them, and match that. So I agree with you wholeheartedly. When I got into VR, I knew nothing about this industry five years ago. I tried VR for the first time. Blew my mind. Went, “that’s it. I’m in it.” Since then, I took a really broad approach; we did everything from 360 videos, AR apps, VR apps, VR training, photogrammetry, 3D modelling — you name it; we did everything. And the idea was to study the industry, figure out what works, what customers want, what they’re willing to pay for, what they’re not willing to pay for. And through doing all of that, we’ve developed this really amazing lens on the infrastructure, and what works and what doesn’t. And I think you have that from a decade of experience, and I think your advice is very spot-on, especially for startups listening to this. (to audience) Startups! Pay attention!

Matt: (to audience) Don’t do what I did. Yes, yes. Exactly.

Alan: Let me ask you — in the interest of time, I want to get the most value for people. How would a company start to evaluate or get started in using these technologies? What would your advice be to somebody that’s listening to this podcast, and they’re saying, “wow, I have a HoloLens,” or, “our company bought one; it’s sitting on the shelf, nobody’s using it.” What would your advice be to companies to get started?

Matt: Educate yourself. That’s the big one. Like, there’s no turnkey, off-the-shelf solution. If you call us, we’ll say we probably can’t help straight away unless you have that education, and you know what problem we need to solve, and you know what you want to do. There are no real agencies out there where it’s “hey, talk to them and they’ll definitely figure it out for you.”

Alan: Well that’s us. I’m going to put a plug in for Metavrse.

Matt: Kay, so that’s you guys.

Alan: I’ve never put a plug in, but that’s literally what we do for a company.

Matt: OK. But yeah, we find that people have the same bad ideas over and over again, or they have the same misconceptions about what AR is good for, or how things actually work. By far the best thing you can do, if you want to get in early, is educate yourself on what the capabilities are, what the constraints are in the technology, and where the use cases make sense. It may make sense for your company right now, or maybe in a couple of years. I’ve actually written a bunch of blogs where I’ve tried to capture as much as I know and have learned about use cases, design problems, how the tech works, and all those sorts of things.

Alan: Where can people find that?

Matt: Just on Medium. If you google my name from the podcast — I’m the only Matt Miesnieks on earth — you’ll find either the 6D.ai blog or my Medium posts.

Alan: I’ll put a link to your Medium in the show notes.

Matt: Thanks. And then, do experiments. Start building on ARKit if you can. Poke around and download apps, and you’ll see all the problems that are there. We’ve always found that all our customers right now — or, the only ones we can cope with — have usually tried to do something themselves. Like at Coachella: you talk to Sam and he says, “look, this is the best that could be done today, but we’ve got all these problems that we wish we could get around.” And then we can say, “hey, we’ve got solutions for all of those,” and it’s a very easy conversation. But if people don’t really understand those problems, it’s hard; you know it’s going to take a while to get them to that point.

Alan: Yeah, I found that with people like Sam; he’s actually rolled up his sleeves and built stuff. I think you have to run into these problems yourself… we built a Web AR application, oh man, must be two and a half years ago now. And Web AR back then… Web AR now doesn’t work very well; back then, it sucked! The product that we put out for the client worked perfectly. It did exactly what they wanted. But my development team threatened to kill me if I ever sold it again.

Matt: Yeah.

Alan: But having gone through that problem, the first question out of people’s mouths when they ask about AR is, “can we do this web-based?” and I’m like, “yes you can, but it’s going to be 10 times the time and 10 times the price.” It’s just not there yet. And having realistic expectations where a professional can outline that 1, 3, and 5-year roadmap, I think, is key to that.

Matt: Yes. Yeah. That is the key. Think sort of medium-term for this stuff, and view the first couple of iterations as, really, learning experiments.

Alan: Yeah. And it doesn’t have to be expensive. My last guest on the show was Paul Boris, and he was saying that what they’re doing is enterprise training and equipment maintenance, using AR. So when they’re doing it — this is an enterprise — for them to build a module, it’s about 100 to 200 thousand dollars. But if you’re just experimenting with AR, I mean, you can go get ARKit and ARCore and start messing around with them. You can mess around with Amazon Sumerian. You can go on Snapchat and build some filters and try that in their Lens Studio. You can go on Facebook and start messing around with their, um, I think it’s called Spark AR. So there are a number of different platforms; you can start experimenting, and for zero cost, really.

I always encourage people to do that, especially if they’re in the research and education mode. Try it and fiddle around. One of the things that came up was, what are some of the jobs… or, not jobs, but I guess roles that would be required to do this? Things like Unity developer or 3D modeller and stuff like that. So I think by slowly introducing it, you’ll realize what’s necessary.

Matt: Yeah, I know; there’s a lot to it. You don’t want to be building big, in-house teams to do this stuff, when you can… you get sucked into trying to boil the ocean. Everything from small in-house projects that become something with a continuously rolling scope, through to companies like Magic Leap or others — Microsoft — that are basically trying to build entire ecosystems by themselves. And it’s one of the big temptations with AR; the more you learn about it, the more attractive and exciting it is, and the more you want to go after it.

Alan: It’s the shiny penny problem. “AR can solve every problem; let’s do it all!” (both laugh) Amazing. So speaking of that — speaking of problems — and this will be our last question because I know you’re very busy, but what problem in the world would you want to see solved using XR technologies?

Matt: It’s hard to say. People asked, “what’s the killer app for smartphones?” when we were putting the Internet on phones. And really, there was no killer app; the “killer app” is that it just lets us connect to everything the Internet offers in a more natural, engaging, convenient way. Phones have given us superpowers, in a way; our brains are connected to the sum of human knowledge. We can communicate with anyone, anywhere, anytime. AR is going to do more of that. It’s gonna give us more superpowers. It’s going to expand our sense of sight, expand our ability to know things in real time. It’s that potential that excites me. You know, I feel a responsibility that, as this type of power is developed, how is it going to be used responsibly? How can we at least try to imagine some of the things that we need to prevent? But yeah, if I have to think about what the attraction of the space is to me, it really is just that: those new types of superpowers that we’re going to enable for people.

Looking for more insights on XR and the future of business? Subscribe to our podcast on iTunes, Google Play, or Spotify. You can also follow us on Twitter @XRforBusiness and connect with Alan on LinkedIn.

  • 35:35 https://medium.com/@mattmiesnieks
