Printing a Model of Light with XR, featuring Bentley’s Greg Demchak

Used to be the best way to plan a three-dimensional construction project was on a two-dimensional blueprint, or perhaps a wooden model. We live in an era where entire digital twins — models made of light — can be used, but not everyone is using them. Bentley Systems' Greg Demchak drops by to explain why that needs to change.

Alan: Thank you for joining the XR for Business Podcast with your host Alan Smithson. Today’s guest is Greg Demchak from Bentley Systems. Greg has been designing and driving the development of immersive digital simulation for architecture, engineering, and construction markets for the last 20 years. Educated as an architect, he transitioned to software design after completing a degree in design computation at MIT. He went on to become a senior user experience designer for the Autodesk Revit product, product manager for SYNCHRO 4D software platform — now owned by Bentley Systems — and currently leads the mixed reality team for Bentley Systems. He’s been pushing the envelope of this technology and software for the Microsoft Hololens, and recently built an app for the global launch event of the next generation Hololens 2. To learn more about the work that Greg and his team at Bentley are doing, visit bentley.com. Greg, welcome to the show.

Greg: Thanks, Alan. How’s it going?

Alan: It’s going fantastic. I wanted to say thank you so much for taking the time to join us here. And let’s kind of unpack the work that you do at Bentley Systems. From a 10,000 foot view, what do you guys do?

Greg: Yeah. So Bentley Systems — just to frame that — is a global software company focused on engineering and infrastructure, architecture, and construction software. So it’s– we basically produce software for the built environment. So anything from bridge design, to high-rise construction, to infrastructure that needs to be modeled. And it’s a platform that serves that industry across the board.

Alan: Well, that's a big industry, considering there's the equivalent of Manhattan being built every single month somewhere in the world. So you work with large infrastructure projects, building skyscrapers, bridges, infrastructure. How does XR fit into that?

Greg: It's a good question. So the way we see XR fitting into this is– and you'll see this term, it's really becoming — I think — quite popular now in the industry, this idea of the digital twin. What started out as 2D drafting and then sort of evolved into 3D models, and then this idea of building information modeling, is evolving into this idea of the digital twin, which is that any given building or asset or piece of infrastructure can have a parallel digital representation of itself as a 3D model, and now also as a 4D model, which is to say that the model evolves and changes through time, just like the physical building. And the XR piece is a really cool way to basically bridge that digital and physical space in a kind of natural way. So that's where we are developing on top of the Hololens platform. It's basically a way to take those digital assets, and then render those assets as digital artifacts or 3D models or information in the context of the physical space. So that's the opportunity. These buildings, these infrastructure assets are evolving and changing over time. And you can basically render digital parts of that through the Hololens and see a mixed reality view of the world.

Alan: So, for example, you've got a– let's just use a building, a skyscraper, you're building a building, you've got the Revit models or the BIM models or the CAD models. So let's just first of all — for people that maybe don't understand what those mean — what do those three terms mean, and how are they being converted into XR technologies?

Greg: I'll just start with that idea. Like, the building information model — and that could be any 3D model. It's not just Revit; it could be a Bentley product, could be a Tekla product, could be from any 3D CAD system, even something as simple as SketchUp, if people are aware of that. It's a 3D model which has got dimensions and shape and size and color, and then these models have information or metadata embedded in them. So objects now know, like, is it a door, is it a window, is it a chair, is it a beam? And so we can take those objects that are modeled in a CAD system and pull them into a head-mounted display — the Microsoft Hololens — and see those 3D models align with your physical space. Imagine you're walking through a building and you want to see — in this case, we focus on construction — the next two weeks of construction that you have coming. You put on the Hololens and see that digital model projected into physical space, versus looking at that on a computer screen or referencing construction documents in a paper format. So it's really about bringing those 3D models into the physical space.

Alan: What’s the benefit of that? I’ve got my blueprints and I’m building the building. What does that afford users of this? What’s the benefit to putting this into, let’s say, a Hololens or similar headset? Why would anybody want to do that?

Greg: So like what we’ve seen with like a lot of our customers, it’s the ability to basically see those models, that content in the context of real scale. So it’s immersive. The interaction patterns, too, are way more simple. Instead of having to learn a mouse or a keyboard or spin around a model, at least with the Hololens — and also I think this touches on VR too — you can basically– you put the headset on and the user interface is basically your head. You move around, you look around, and then you see the model just by moving your head and your gaze throughout the space. And then another thing that’s kind of cool about the next version of Hololens 2 — which we can probably get into — is all the interaction patterns are just by using your hands. So you reach out and grab things, controls, models, and it’s all just near interaction with your hands. And there’s not a lot of learning to do when it comes to like typical CAD systems.

Alan: People get their CAD models in. Is there an automatic converter or– I’ve got my BIM models, my building information models, I’ve got my CAD. What do I do?

Greg: Well, there's a lot of ways to do it. I mean, a lot of people just– anything you can get into Unity, you can basically get into a VR or mixed reality experience, but they're like custom one-off apps. And that's how we started, in just prototyping. But that basically led us to realize we could build a generic pipeline. So we've built a technology stack on the cloud on top of Azure, basically, that lets you import geometry models into our platform — called Synchro, which is this 4D construction animation tool — and then automatically we create a web endpoint that the Hololens can connect to, and then pull that geometry down. So you literally just log into the server, enter the port information and a 3D filter, and then we can send that model geometry into the Hololens. We tried to lower the bar of entry, to make it as easy for people to get into it as possible. But of course, if you want to just like hack and get into it, any 3D asset you can load into Unity would basically allow you to build apps for this platform or Hololens.
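The flow Greg describes — geometry imported server-side, exposed at a web endpoint, pulled down by the headset after a filter is applied — can be sketched roughly as below. The endpoint shape, field names, and tag-based filter are all assumptions for illustration, not Bentley's or SYNCHRO's actual API.

```python
# Hypothetical sketch of a filtered-geometry pipeline: the server exposes a
# per-project endpoint, and the client asks only for objects matching a
# filter (e.g. work scheduled in the next two weeks). All names invented.

def build_endpoint(server: str, port: int, project_id: str, view_filter: str) -> str:
    """Compose the URL a headset client would poll for filtered geometry."""
    return f"https://{server}:{port}/projects/{project_id}/geometry?filter={view_filter}"

def select_geometry(objects: list[dict], view_filter: str) -> list[dict]:
    """Server side: return only the model objects tagged with the filter."""
    return [o for o in objects if view_filter in o.get("tags", [])]

model = [
    {"id": "beam-01", "tags": ["structural", "next-two-weeks"]},
    {"id": "duct-07", "tags": ["hvac"]},
]

url = build_endpoint("synchro.example.com", 443, "bridge-42", "next-two-weeks")
payload = select_geometry(model, "next-two-weeks")
print(url)
print([o["id"] for o in payload])  # → ['beam-01'] — only the beam is in the window
```

The point of the sketch is the "lowered bar of entry": the client never touches the CAD file, it just asks the cloud endpoint for the slice of geometry it needs.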

Alan: So I would assume — and then maybe I’m wrong here — but I would assume this is very much experimental, or still at the very beginning phases. Are you starting to see customers use this on job sites as part of their daily workflow, or is this still kind of experimental? Where are we on the timeline?

Greg: So I think we're still in that early experimental phase. We got involved in the early days with Microsoft through a series of hackathons; Microsoft went around the country basically and introduced the [Hololens] V1. And that was like two and a half, three years ago. And at that point, they had zero users, right? They were trying to connect with developers and sort of that hacker culture. We got involved in a hackathon in Boston back then. And I think what's interesting about this is back then I brought a customer from Duke Energy — one of our users — and we brought their models, their content, their 4D kind of simulation. And at that point, we were just using FBX and sort of animating content that way through export. But we got real signal from the user that there is value here, and they started using that app and testing it in context. But it wasn't– I wouldn't say it was fully adopted, but we had a lot of early adopters that helped guide our experience and our feature set, and where we were taking the application. And this was sort of in the nuclear space. It was in heavy infrastructure projects, large tunnels. And we also piloted it with a lot of high-rise construction projects too, just to test it.

So I think that was good. And then a lot of that feedback went back to Microsoft, to help inform where the next generation headset was going. And I think largely they tried to address a lot of those issues, issues that we were seeing in the field, that would prevent that kind of widespread adoption. So comfort, ease of use, field of view, processing power, those kinds of things. So I think they've made a lot of improvements in security. The fact that it can integrate with a hardhat or not was also another big win, obviously, in the construction space.

Alan: It feels like Microsoft really did it right, where they came out with a device that was very functional. And then instead of just having a bunch of hardware designers design the next one, they actually listened to the customers and said, “What are the limitations?” And I’m assuming they probably all got the same– everybody’s saying “It’s really heavy on my nose. Crushing my face.” [chuckles]

Greg: Yeah, yeah, I know. I’ve done hundreds of demos, and I’m sure you have too, you know what it’s like. And it’s crazy too, because now having the Hololens 2, it’s almost painful to go back to give a demo on 1.

Alan: We actually sold our Hololens 1’s, we got rid of them. It’s just like, every time I put it on I get a headache. And it wasn’t the optics, it was the weight of the thing, and it’s funny because some– the other day I was explaining to somebody and they said, “Oh, by the way, there’s a headstrap inside the box.”

Greg: Oh, yeah! [laughs]

Alan: I was like, “Oh man, I feel dumb right now.” [laughs]

Greg: So a lot of changes with that. So, yeah, the fact that it was so front-heavy, it would weigh down on your nose. Most people were always complaining about the narrow field of view, all these kind of things, the way you put it on. The interaction pattern, by the way — the air tap thing — how many times if you give a demo, and like you’re just trying to teach someone how to tap these holograms?

Alan: I did a talk the other day, and explained it that anybody over 30 struggles with that air tap. They poke at it, they reach for it. They just try to do anything but the actual air tap. And then you give it to a kid, anybody under 30 and you say, “Here’s the interaction,” you show them once. And boom boom, they got it. They’ve got their hands in there, they’re rotating things. They just figure it out immediately. So you’ve been kind of working with this for a couple of years. You’ve figured out Microsoft Hololens, we can import these models. Where’s the ROI being driven from this? Why would I take the time to put this in a Hololens, stand in a construction site, put this on my head? Like, where are people– how would people use this, and where are they using it?

Greg: For me, this goes back to, like, the source data. I think the first step is: what's the ROI in making any 3D model? In the construction space there's been this evolution, right? Like we've had to evolve from 2D drawings — which goes way back, like hundreds of years — to 3D modeling, which was with like plywood and chipboard and wood and whatever. The way you would build a 3D model is you'd literally just cut it out of wood or foam core or something like this. And then eventually you could do 3D printing, right? You could 3D print from a CAD model and have a physical 3D print to look at. But that was always kind of predicated on the fact that you're building a 3D model. So the real question, I think, is: what's that initial value proposition of the 3D model? And this is my belief: I think that anytime you make a mockup, a 3D model, a simulation prototype, it lets everyone understand what your intent is and then provide feedback and basically interrogate the mockup, and drive towards a better product in the end.

And manufacturing and aerospace have been working in that kind of space for a long time. And then the architecture and construction industry has kind of slowly been picking this up in the last 15, 20 years. So what I see is that investment in the 3D model, and then what we also bring is the fourth dimension, which is the construction schedule — it's construction animation over time. I think the value that the 4D animation gives to a construction team is they can see into their future, like what's going to happen. And it means that by looking into the future, they can identify risk and try and prevent problems from happening in a potential future. So it's a way to kind of immerse yourself in a digital construction workflow, to just understand what's coming next, what's in your future. And then we've basically enabled all that content that you can author in these CAD systems — like Synchro and Revit and whatnot, 3D modeling systems — and just carry that out into an immersive perspective experience with the Hololens. And then going a bit further than just a VR experience, you can actually go and position those models in a room instead of doing a 3D print. Sometimes I refer to this as printing the model with light with the Hololens. That's like a print of pixels in light, and we can animate that model. And so you could never do–

Alan: Hard to animate a printed physical model. You can’t.

Greg: Exactly. There's no way to animate a physical 3D print made of plastic. So for one, that gives you that full, kind of immersive simulation. The other thing I've seen customers think is quite interesting is to be standing in context and see that hologram lined up with your physical space. I think it gets really interesting what it means to see that CAD model as an overlay that you can interact with; it gives you like X-ray vision. So that's kind of like pre-construction planning and kind of pre-simulation. Another situation we're looking at is– OK, during construction, what if we start scanning the construction sequence as it occurs and produce a photogrammetry mesh, and then we could load that mesh back into the Hololens later for an operations and maintenance user? And if you think about that, now you're going beyond just the CAD model rendering in context to, like, a photogrammetrically accurate mesh capture. And then you really have kind of an X-ray vision of the world.

Alan: I would think also being able to look at models on the desktop — so kind of a God view — brings people together, and you can look at these animations; but then when you bring it into the actual building, you overlay the plans. I would think that — let's say, for example, you're in a building in its initial phases — you could overlay, "OK, this is where the HVAC system is going to go," and it's like a one-to-one from the blueprints. And then somebody using the scanning capabilities of a third-party scanner — or even the Hololens itself — could look for anomalies in the millimeter range. So if the HVAC system happens to be off by a couple millimeters, then maybe it's auto-flagged. Is that something– is that kind of the idea with this?

Greg: I don’t know if we can get to auto-detection or not. That’s a good use case, or that has to be processed later, and then sort of like–

Alan: Or maybe you can annotate on it — let's say the HVAC system's here, but there's the plumbing, and it's running through where the HVAC system is supposed to be. Because I know rework is a huge problem in the construction industry. It's a multi-billion-dollar problem every year. You build a part of a building, you put in your HVAC system and then go, "Oh, the plumbing is supposed to go where that is," and rip it all out and start over again.

Greg: That, we do support. So we have this tracking of issues in the device, so we can status objects as installed or not installed or defective. And then we can also take a photo and drop a note, so that then becomes something from the field they can go back to.

Alan: Now, does that go back and alter the blueprints at all, or annotate the blueprints, I guess?

Greg: It doesn't go back into the blueprints directly; it goes back into the Synchro 4D — the digital twin, you could say. So it goes back to the database as an event in the timeline. So in fact, you can kind of– you can basically scrub backwards and forwards through the model, and see status progression as it takes place. So that's actually another really interesting use case we've developed: this ability to status work as completed as it goes in. So it's like model-based tracking of completions, versus just like a daily log written down on a piece of paper. And then what that leads to, when you get to a model-based tracking solution, is now you've got basically a historical record of work that was actually completed, observed, captured, recorded. And if you think about that, you can compare that against the schedule. You can use that to start issuing payment, basically, for work complete. So like a digital audit trail, you could say.
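The scrub-backwards-and-forwards idea Greg describes amounts to replaying timestamped status events against model objects. A minimal sketch, assuming an invented event shape and statuses (this is not Synchro's actual schema):

```python
# "Digital audit trail" toy model: field statuses are recorded as
# (day, status) events per object; scrubbing to any day means finding
# the latest event recorded on or before that day.

from bisect import bisect_right

def status_at(events, day):
    """events: list of (day, status) sorted by day. Return the status in
    effect on `day`, or None if nothing had been recorded yet."""
    days = [d for d, _ in events]
    i = bisect_right(days, day)
    return events[i - 1][1] if i else None

duct_events = [(3, "delivered"), (10, "installed"), (14, "defective")]
print(status_at(duct_events, 2))   # → None — nothing recorded yet
print(status_at(duct_events, 12))  # → installed
print(status_at(duct_events, 20))  # → defective
```

Because every status is an event rather than an overwrite, the same record supports both the timeline scrubbing and the payment-for-work-complete comparison against the schedule.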

Alan: Yeah, that's really amazing. When you're making these digital twins — especially in a new building — I guess you're pulling them directly from architectural renderings and that sort of thing. How– what's the conversion? What is the process to create a digital twin of a space? Actually, I'm on your website now; it says, "1) Combine engineering data, reality data, and IoT data. 2) Create a virtual experience using 3D and 4D. 3) Gain a deeper understanding of infrastructure assets."

Greg: Yeah.

Alan: But I’m assuming there’s more to it. [laughs]

Greg: That's– I mean, basically we can import from — like I said before — any kind of 3D model system. That becomes– that's the 3D asset import. We can also import any schedule information from P6 or Microsoft Project, and link those together. Then on the back-end we basically wrote this Unity converter that takes all that geometry that gets imported into our SYNCHRO engine — that digital twin aggregator — and then we can basically pull out that content and render that inside of Unity. So we basically wrote a pipeline to go from 3D asset into Unity, and then render that into the Hololens.
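The linking step Greg mentions — schedule tasks from P6 or Microsoft Project tied to 3D objects — is what makes the model "4D": for any date, the renderer can decide which geometry should exist yet. A toy sketch, with invented task and field names rather than the actual SYNCHRO data model:

```python
# 4D link sketch: each schedule task carries the IDs of the objects it
# installs, so the visible model for a given day is derived from the
# schedule rather than hand-animated.

tasks = [
    {"name": "pour foundation", "start": 0,  "end": 20, "objects": ["slab-1"]},
    {"name": "erect steel",     "start": 15, "end": 60, "objects": ["col-1", "beam-1"]},
]

def built_by(day: int, tasks: list[dict]) -> list[str]:
    """IDs of objects whose installing task has finished by the given day."""
    return [oid for t in tasks if t["end"] <= day for oid in t["objects"]]

print(built_by(10, tasks))  # → [] — nothing finished yet
print(built_by(30, tasks))  # → ['slab-1'] — steel still in progress
print(built_by(60, tasks))  # → ['slab-1', 'col-1', 'beam-1']
```

Updating the schedule automatically changes what the headset shows for any date, which is why the geometry and the schedule are imported and linked in one pipeline.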

Alan: How many active projects are using Hololens right now? Is this just something that you're kind of working with internally, and then saying to Duke Energy or one of the companies, "Hey, you want to give this a try with us?" and it's kind of like a partnership for R&D? Or is this something that you're rolling out as, "Hey, we've got this product and it's part of our workflow now"? And what does that look like?

Greg: It's early adopters. So usually it was construction companies that had– luckily they had some budget and some innovation money to spend and work with us. So typically they'd buy one or two Hololens. And then we would go do basically co-development, hackathon-style work with those customers, and we'd build basically working prototypes with specific customers. But then we would take all the learning from those prototypes — and this was companies like Skanska, Balfour Beatty, Mortensen in the US, Tesla also in the US, I mentioned Duke Energy — and they would sort of funnel into our development pipeline, and then eventually that kind of aggregated and got us to the point where we felt like we could actually put an app out onto the Microsoft Store. So that was cool. We wanted to make it real, you know. So we have an app; it was published to the Microsoft Store and worked with Hololens 1. So I think that was kind of our roadmap: a lot of iterative prototyping, but eventually leading towards a product that anyone who had a Hololens could download and make use of. Again, I think that was that early adopter stage. I think there was that smaller group of people that were buying Hololens 1 and willing to kind of help us evolve that experience. And I think as we go into Hololens 2, it'll just expand. That's at least the hope.

Alan: Even though it’s early days, are you seeing ROI being generated from this, or is this just a better way to visualize? How are you measuring baseline without this, to with it?

Greg: It always comes back to the challenge of 3D or 4D versus paper — digital versus not digital — because–

Alan: How do you prove it to somebody? It’s anecdotal. [chuckles]

Greg: Well, it's so expensive. Like, you don't have the luxury on big construction projects to go, "Let's make one analog, and then let's do another digital, and see what happens." It's such an expensive proposition. So usually when a company goes in, they're just sort of all in: they're going digital, they're building 3D models, and they're requiring their subcontractors to deliver fully detailed fabrication models. And they just go for it; they deliver that project. I think what I've heard from an ROI perspective, when you're using these tools — this idea of BIM, building information modeling — is you see reductions in rework. More productivity in the field. Better communication, which reduces errors in the field. More confidence in the schedule and the program, because you can see it in a digital way first.

I think all those benefits just sort of cascade into the use of the Hololens and mixed reality. I haven’t seen where you would go to Hololens, if you don’t already have kind of investment and a belief in that kind of digital information process first. It’s sort of the investment in building the 3D models and sort of that changing culture around project delivery. So I think it’s going to naturally kind of evolve, as more construction companies and architects continue to develop 3D assets and kind of keep building that digital twin, then the Hololens is just a natural extension of that process. I don’t know if I– it’s like without investing in all that 3D model, then you can’t– there’s not a clear way to get into the Hololens in the use cases we’re looking at.

Alan: Are there still companies not using 3D when building infrastructure?

Greg: Yeah, you’d be surprised.

Alan: I would be surprised. Like, how is that possible in 2019?

Greg: Yeah, exactly. There’s still a lot of construction that happens in old analog ways. It might still be digital, like 2D, using AutoCAD or MicroStation. And then 3D, it’s still evolving, especially construction is still emerging.

Alan: Well, it’s amazing that we’re still at the very beginning of just the digital transformation, I guess. Here we are, on Hololens and 3D and moving into AR and VR and mixed reality, and some people are still using paper and cardboard models of things.

Greg: Yeah, yeah.

Alan: Wow.

Greg: So yeah. Well, there's definitely tiers in the market, I would say. So like, the customers that are adopting the Hololens and all this 3D, they're generally tier 1 major contractors building huge infrastructure jobs. So if it's a new airport, or if it's a new bridge or tunnel, there's probably a good chance there are 3D models there and there's a digital process in place. As you start to scale down — if it's smaller scale, like residential, single-family housing — you probably see less use of the 3D models.

Alan: Yeah, it’s interesting. My brother rolled out his paper blueprints — they’re building a house — and they rolled them all out yesterday. And it was all 2D. Which was great, but it’s one house, and I get that now. I see it’s going to take a minute to trickle down to those use cases, because of the cost. Over the last kind of five years, let’s say, are you seeing a dramatic decrease in the costs to develop 3D versus 2D, or is it still kind of expensive or…?

Greg: Well, the software is not expensive. I think the costs are more like training and kind of large process change within the organization.

Alan: Hmm. Interesting.

Greg: So like the cost is more like, what does it cost to basically train a staff, an entire division, or upskill everybody? If you think about it, the software itself actually hasn't really changed price in years — like in 15, 20 years. I think those are the real costs, and then maybe fear that if you implement something new, it won't work. But that makes sense. Like, the cost is not the software.

Alan: I said this on another podcast and I think it’s true. It’s, “This is no longer a technology problem. This is an adoption problem.”

Greg: Exactly. It's a cultural problem. And in fact, questions like your ROI question are really good. I was running a workshop a couple– maybe it was like a month or two ago, and it was based on this technique called "questions are the answers," based on this book I read by this MIT author. We try and get people to only formulate questions and not answers. And then the question I had, I was basically saying, "What is preventing the adoption of XR and Hololens and 3D in the market or in your workplace?" And the questions that people wrote — instead of answers, they had to come up with questions — the questions that came to the top were "How do I prove this to management?", "How do I capture the value and prove there's ROI?", or "How do I convince management?" It wasn't like "Why is the technology not working?" or "How come this feature isn't there?" All the questions that came up were cultural. How do you prove ROI? How do you present this to management to get total buy-in?

Alan: What are some of the things that you’ve found to help with that adoption? Because I think you have a unique insight into this, in that you’re actively working on this technology. You brought Duke Energy in, for example. How did that conversation go? Was it– did you find a champion within the company to back that or– what we’ve seen in doing a lot of these podcasts is that it comes down to having a champion within the organization, that’s high enough up that can kind of get buy-in from the C levels, because if you don’t have that buy-in, it ends up being a small POC and then dying.

Greg: I think that's exactly right. I think I can validate that idea; it definitely fits. What I'm finding is, when you find that champion — like the guy we had from Duke Energy, who was willing to fly up to Boston on a week's notice, and then went back and got them to buy Hololens and implement it and drive that. So what I'm finding is– I think what's helping with the adoption is to connect with like-minded, kind of spirited people who have this belief in it. And it's like a partnership. Like, I don't go out and spend a lot of time pitching it or trying to convince people of the value of it yet. It's more like, let's connect with people in industry who also have this belief that there's something there, and what can we discover together, versus me trying to pitch it.

And so for me, it’s been a lot about that. It’s about finding people who are out there in the real world trying to do work. And if they think there’s something there, then I think there’s something there. And that starts a cycle. Even back to Microsoft, you know, they’re like the hardware provider. And then we’re building a software layer. Then there’s the actual real user. So I think that’s what’s been working. It’s like everything that we’ve done has been always with an actual customer and a use case in mind, and testing it in the real world. So it’s not abstract theoretical research. At least that’s how we’ve approached it.

Alan: Yeah, I think that's the way we've got to be looking at this. There's obviously researchers doing great research on theoretical "Hey, what if…" scenarios. But I think practically, it's "Yeah, what if we use this to save money?" [chuckles] "What if we use this to save lives and money?" That's really the only question that matters in the C-suite. Amazing.

Greg: And that's the challenge of, like, how do you– I think this is an interesting software challenge I'd throw out there, maybe for anyone listening on the podcast, because I don't have an answer to it; it's something I think would be fun to solve. It comes back to that ROI question, right? Let's say you're using these great new tools: you're building 3D models, you're building 4D models, you're going out in the field and you're solving problems. How do you easily, effectively, quickly capture that you've solved some kind of problem in a digital way, without getting in the way? And then start to maybe capture that value somehow, right? Because I've seen a lot of cases where someone will be looking at a 4D model — which is the construction schedule animating — and a superintendent will go, "Wait, that doesn't make sense. That won't work." Or I've seen collisions of a huge piece of HVAC equipment: we'll animate that thing coming into a building, and it will literally clash with the steel.

So imagine if that piece of equipment had arrived on the site in real time and they lifted it with the hoist — it would have made a collision. And you could have impacted the schedule by weeks. And that's a lot of money, right? But whenever you're simulating it, it almost gets taken for granted. "Oh, it doesn't work." And you fix it, and you move on to the next task. And so it's very easy to lose track of, I think, all those little moments where, well, maybe you did just save a ton of money. I don't know how you do it without some kind of quantum simulator, I don't know. But if there is a way, that would be a really cool piece of tech to build.
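The hoisted-equipment clash Greg describes is, at its simplest, an axis-aligned bounding-box overlap test run at each step of the simulated lift path. This toy sketch (invented geometry, not how any particular engine implements clash detection) shows the idea:

```python
# Sweep a unit along its planned hoist path and report the first position
# where its bounding box intersects a fixed steel beam.

def boxes_overlap(a, b):
    """a, b: (min_xyz, max_xyz) pairs of 3-vectors; strict AABB overlap."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

steel_beam = ((0, 5, 3), (10, 6, 4))  # fixed obstacle in the model

def lift_clashes(path, size, obstacle):
    """Return the first position along `path` where a unit of the given
    size clashes with the obstacle, or None if the lift is clear."""
    for pos in path:
        unit = (pos, tuple(p + s for p, s in zip(pos, size)))
        if boxes_overlap(unit, obstacle):
            return pos
    return None

hoist_path = [(2, 0, 0), (2, 3, 2), (2, 5, 3)]  # unit rises toward the beam
print(lift_clashes(hoist_path, (2, 2, 2), steel_beam))  # → (2, 5, 3)
```

Catching this in the 4D simulation — before the equipment arrives on site — is exactly the weeks-of-schedule saving Greg says is so easy to take for granted.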

Alan: Are you doing anything in the AI space as well?

Greg: Seems like everyone is, but practically I would say I haven't got into that yet. There's a lot of talk, a lot of discussion, but we haven't figured that one out yet. Obviously there's a lot of information being collected that we haven't analyzed yet. So that's an interesting option, maybe. Maybe with AI, if you started to compare, like, the micro-changes people made in the 3D model, you could extract some value out of that.

Alan: Well, the thing is, you guys have access to enormous amounts of data. And when training AI algorithms to look for whatever it is you want, the first thing you need is the data — and you guys have the data. So how do we apply this to maybe look in and make determinations based on the different information that's coming in? You could look at: is a supplier delay imminent, or is something on the worksite going to cause problems down the road? Really modeling out scenarios, I think, is the best use case of this technology — of AI, anyways — modelling scenarios. I think that's where it's going to really become interesting, when you combine AI and XR, when you can say, "Hey, here's the progression if we do this, and here's the progression if we change this variable," and the build times and build costs over a movable scale. So if you've got the Hololens on, you're looking at a building construction site, and you grab the slider and you say, "Day 2 is dig the hole, and day 150 is the full building" — you've got that slider. And based on changing different variables, you can now look at that in real time and say, "Hey, that 150-day completion time is now 200 days because of this, this, and this."

Greg: Yeah, definitely. All the historical trends and construction data could be fed into the model, and it could basically auto-generate a construction sequence, for sure. And then the Hololens becomes just a way to visually interrogate that and see it.
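[Editor’s note: the “timeline slider” idea the hosts describe — change one variable and read off a new completion date — amounts to recomputing a schedule’s forward pass. Here is a minimal, hypothetical sketch; all task names, durations, and the `finish_day` helper are invented for illustration, and a real 4D platform like SYNCHRO involves far more than this.]

```python
# Sketch: recompute a schedule's finish day as task durations change,
# the way an XR timeline slider might query it. All data is invented.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration_days: int
    depends_on: list = field(default_factory=list)

def finish_day(tasks):
    """Finish day of the whole schedule (simple forward pass)."""
    done = {}
    def end(t):
        if t.name not in done:
            start = max((end(d) for d in t.depends_on), default=0)
            done[t.name] = start + t.duration_days
        return done[t.name]
    return max(end(t) for t in tasks)

dig = Task("dig the hole", 2)
foundation = Task("foundation", 48, [dig])
structure = Task("structure", 100, [foundation])
plan = [dig, foundation, structure]

print(finish_day(plan))        # 150 with the baseline durations

structure.duration_days = 150  # a delay on one task ripples through
print(finish_day(plan))        # now 200
```

Changing any task’s duration and re-running `finish_day` is the whole interaction: the headset would just visualize the result.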

Alan: I’ve heard it mentioned that AI is the real driver of this technology and XR is really the visualization of the data. And it really makes sense when you think of it that way. By 2030 — I just read this morning — there’ll be half a trillion IoT sensors around the world, in everything from your shoes to your light posts. How does a human, any human, really make sense of that much data coming in? And the answer is you can’t. So how do you then apply smart algorithms to give you better data in a way that we mere humans can understand?

Greg: I think that’s where things will definitely get interesting. Every time I’m in an Uber these days, I’m thinking about these cars, where they’ve got multiple smartphones pinned to the dashboard, all of these kind of heads-up ambient pieces of data feeding into the mind. And it feels like this early hacked state, where eventually all those controls basically disappear, fade away, and maybe it’s just on-demand information flowing into a pair of glasses or contact lenses, or whatever it ends up being. But essentially I feel like we’re already in this kind of augmented world, but it’s all kind of–

Alan: It’s ghetto.

Greg: Yeah, it’s ghetto, exactly. It’s super ghetto. Everything’s like hacked together, you know, it’s like–

Alan: Yeah, you’ve got like– the same thing with Uber Eats. You go to a restaurant and they’ve got like six iPads. It’s like, “really, you need six iPads to take orders?” This is ridiculous. There’s cables everywhere, and it’s like, oh my god. I think you’re absolutely right. We went from brick phones — you know, those big-ass brick phones with a bag — to a tiny phone, to now smartphones. And that process took — I don’t know — twenty years, I guess? We’re kind of in the brick phone phase of this technology; 10 years from now you’re going to look at the Hololens 1 and laugh at it and be like, “oh my god, I can’t believe we used to wear that ridiculous thing on our heads.” And you’re going to look at the HTC Vive — I have one of the Vive Pres, like the very first one — and all the sensors are showing, and it’s a giant facemask-looking scuba thing.

We’re gonna miniaturise that. And probably what will happen is AR and VR will just merge, and you’ll have a pair of glasses with a full 200-degree field of view and full occlusion. When you’re in VR mode, it just kind of darkens out the world and goes into VR mode. And in AR mode, it just lightens up the glasses and you can see the world around you. But like you said — and to quote, I believe, Sundar Pichai from Google — “the device, the very idea of the device, is going to fade away.” So you’re not going to hold up your phone every time you want to check a message; it will just be intuitive.

It’s really interesting that Microsoft has spent so much time on the user interactions, being able to use your hands naturally. And I think we’re just at the beginning. Somebody said to me, “how are we going to communicate with these devices?” And I was like, I don’t know, man, maybe we’ll talk to it, maybe wink at it. Maybe we’ll just think it. But the reality is, nobody really knows right now. And that’s, I think, the excitement of it: we’re at the precipice of the next computing platform and nobody really knows how to use it to its full potential, even close. So there’s so much opportunity here and so many problems and challenges within the industry to solve. Like, how do you solve the security issue when somebody puts on your headset — can they start to be your avatar in a virtual world?

Greg: Yeah. That’s interesting. That is, I think, a really good point. It’s such a wide-open space. But that’s what makes it so interesting and so fun. And it’s also a space that’s been socialized, honestly, in sci-fi films for a long time. Usually you see it in a film and there’s always holograms, but no one’s wearing anything on their head. There’s just a magical hologram. [laughs]

Alan: [laughs] It’s funny, one of our investors said, “When are we going to have holograms floating in the air?” I’m like, “Well, you know, we can do that now.” He goes, “No, without the glasses.” I’m like, “Uhhh… maybe never. You need these glasses.”

Greg: But it’s funny. This idea has been with the culture for a long time. I always think back to — I think it was Prometheus or something — where the drones are flying. For me, that’s a great digital twin example: drones flying through the alien cavern, doing basically real-time laser scanning and producing a real-time holographic representation back in the command center. That, to me, is super cool; it was imagined in science fiction, but we’re actually getting to the point where it could almost be real, except you do have to wear the headset.

Alan: There’s nothing wrong with the headsets. Everybody’s trying to get to this point where we don’t need headsets, but I think that’s really not reasonable.

Greg: No. And I’ll tell you what, too. With the Hololens 2 — I don’t know if you’ve experienced this yet, but I would love to show it to you if we ever get the chance — it supports multi-user, just like any other multi-user game. So when we did this launch in Barcelona, we had — I think it was like 20 Hololens at some point — all in session. We had to build an app just to manage which devices were in and out of session, because everyone was sharing the same information at the same time. So we had three people in a room, and with the hand interaction it’s super cool. Everyone’s reaching in, and you can literally pick up holograms and pass them back and forth to one another. And there’s this sense that it’s really there. And if people miss the grab, it drops down, because we turned on gravity. It’s like, “oh!” and you watch them look down, like they actually dropped an object, and pick it back up. And the cool thing is, it’s a shared experience. And I think that’s another cool thing with the Hololens: you’re still seeing your surroundings, you’re seeing the other people in the room, you’re all seeing the hologram at the same time. And it becomes this believable thing. It’s literally changing your definition of reality.

Alan: My favorite is when people are in VR/AR and they walk around digital objects; they move out of the way of something they could just walk through. [laughs]

Greg: Yeah, we saw people afraid to squeeze on the tower crane. Like, “No, no! Grab it, grab it!” And then they kind of hesitate.

Alan: “Am I going to break it?”

Greg: Yeah, yeah.

Alan: So I’ll ask one last question, Greg, because we’re running a bit long here. What problem in the world do you want to see solved using XR technologies?

Greg: Hmm. I would like to just make information — this digital space that we live in — basically hands-free. I would like to see that evolution out of these phones people carry around, and just deliver information on time, where you want it, when you need it. And specifically in construction, it’s sort of like: just get all these digital assets to people in the most natural way possible. So that’s what I see as kind of an evolution of how people understand buildings, architecture, and information in general. So I think I’m just along for that ride, evolving our understanding of the problem.

Alan: Yeah, I think being able to visualize things in different ways is really powerful. And I think these technologies really are going to unleash human potential.

Greg: I hope so. Definitely.

Looking for more insights on XR and the future of business? Subscribe to our podcast on iTunes, Google Play, or Spotify. You can also follow us on Twitter @XRforBusiness and connect with Alan on LinkedIn.
