Brought to you by MetaVRse

Prepping for Virtual Surgery, with 8Chili’s Aravind Upadhyaya

Of all the jobs that are difficult to train for, surgery is especially challenging, what with needing a body and all. But our guest Aravind Upadhyaya is working to make surgical training virtual, with the help of XR technologies.

Alan: Hey, everyone, Alan Smithson, host of the XR for Business podcast. Today, we’re speaking with Aravind Upadhyaya, co-founder and CEO of 8chili, an Oakland, California-based startup bringing the dream of telesurgery to the real world. We’re going to talk about how virtual reality is improving the outlook for remote surgeries globally. All that and more, coming up next on the XR for Business podcast.

I want to welcome Aravind, thank you so much for joining me on the call.

Aravind: Thanks, Alan. It’s a real pleasure to be on your podcast, and to get to know you, as well.

Alan: It’s such an honor. You guys are working on something that is a true game changer for surgeries and remote telemedicine. Maybe just tell us what you’re doing and the problem you’re solving.

Aravind: Definitely. So, just to do a quick intro: I’m a technologist, an electronics engineer. I’ve spent the last two decades leading R&D projects in mixed reality, computer vision, Internet of Things, and robotics at one of the largest conglomerates, the Tata Group. And in 2016, I had my Tony Stark moment when I first tried the HoloLens. It felt like I could hold the power of X-ray vision in my eyes. And that’s when the journey started for 8chili. We’ve been working on this technology with surgeons for the last two years, kind of in stealth mode, and this year we finally kicked it off. In a nutshell, we wanted to reimagine surgeries with mixed reality, by building a remote collaboration platform.

I want to start with this: 60 percent of new residents are not confident to perform core procedures, and only about 3 percent of surgeons globally have access to high-quality continuous training. Why is that? Because we have a legacy training system. There’s a great adage that goes, “I hear and I forget. I see and I remember. I do and I understand.” And that’s why we are building NAVIX AIR: because experience cannot be explained. You have to experience it to really become an expert, and NAVIX AIR allows residents to experience what a surgeon does in a surgery. What we want to do is take the platform to a very immersive, collaborative experience. NAVIX AIR lets residents see through the surgeon’s eyes and follow their steps simultaneously in the virtual world, without disturbing the surgeon. So you can fail and repeat as many times as you want, even after the live surgery.

The big difference is that being able to see what the surgeon sees brings a feel of reality to the residents, and that’s what is missing in cadavers or the existing VR simulators. Because no surgery goes smoothly or perfectly. There are complications, there are surprises, the kind every surgeon encounters, and these cannot be captured. It’s the surgeon’s experience that comes into play when something is not going as they expect. And this feel of how to react in such a situation: what does the surgeon do? How do they communicate with the nurse, the anesthesiologist, or the other juniors who are helping? What kind of technique do they use? If there is an unexpected bleed, how do they go about tackling it? All these are scenarios that happen in the real world, and there’s no substitute for them.

Alan: That makes a lot of sense. So when a surgeon is operating, when these things pop up, it’s almost impossible to train for every scenario, but you can record it.

Aravind: Absolutely. So we allow them to log in to a portal where it’s saved. And if you’re a surgeon, Alan, let’s say you put on our headband. It’s non-intrusive, so it allows you to do your standard procedure the way you’ve been doing it all this time. And I, as a resident, can then see what your eyes see in full 3D. We transfer the entire 3D scene that you’re seeing, in close to real time. There’s about a half-second delay right now, which we are trying to reduce to near-zero latency. But even with the half-second delay, it’s unnoticeable for a junior. And then I can actually see what you are doing and repeat the same steps simultaneously with the virtual tools I have.
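
For readers who want to picture the plumbing here, the sketch below shows how a headset’s RGB-D feed could be turned into the full-3D point-cloud scene a resident sees, and then thinned to fit a sub-second streaming budget. It uses the open-source Open3D library; the file names, camera intrinsics, and voxel size are illustrative assumptions, not 8chili’s actual pipeline.

```python
# Minimal sketch (not 8chili's pipeline): back-project one RGB-D frame
# into a colored point cloud, then downsample it for streaming.
import open3d as o3d

# Load a color frame and its aligned depth frame (hypothetical files).
color = o3d.io.read_image("frame_color.png")
depth = o3d.io.read_image("frame_depth.png")

# Fuse them into one RGB-D image (depth in millimeters, truncated at 3 m).
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# Back-project to 3D using stock camera intrinsics (a placeholder here;
# a real system would use the headset camera's calibration).
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

# Thin to a 5 mm voxel grid so each frame is small enough to ship
# over the network inside a half-second latency budget.
pcd_small = pcd.voxel_down_sample(voxel_size=0.005)
print(f"{len(pcd.points)} points -> {len(pcd_small.points)} points")
```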

Alan: That’s incredible. So over the course of a surgical resident’s time, how much time would they actually get normally to stand over and watch surgeries?

Aravind: There’s proxy research showing that 40 percent of programs report fragmented training, due to restrictions that impact skill acquisition. That’s one of the leading reasons residents are not confident in performing core procedures. Every resident starts doing practical trials by the second year, and, if you take neurosurgery for example, they do five years of residency and then two years of specialization in neurosurgery. They have to put in at least 100 to 200 cases; 200 cases is the minimum they would need to become an expert in a specific neurosurgical specialty, and even that’s just the minimum. But with our technology, we could enable them to learn not just from the surgeons in their university or teaching hospital, but from expert surgeons all over the world, any time they want.

Alan: If you think about it, we put pilots through thousands of hours of practice before we let them fly a plane with a bunch of people in it. But for surgeons, it’s in the hundreds of hours, despite the complexity. You’re allowing surgeons to have the same level of training as pilots. I think I want the surgeon who’s had thousands of hours of training, versus hundreds. That’s just me personally.

Aravind: Absolutely, absolutely. I mean, this industry, the surgical training industry, is so neglected. We have great tools for everything else, like navigation and robotics. But if you look at training, it’s still so archaic. If a resident stands in the operating room, there is really no space next to the surgeon, because it’s a live case. The surgeon is in charge, so he’s the captain. There’s a nurse next to the surgeon, and then there are the tools. So there is literally no space, and the surgeon is not going to say, “Hey, you know what, come stand here and see how I see things.” No, it doesn’t happen. So most of their learning happens when they go to cadavers, and again, you don’t really get the experience of the variety. That’s the use case we’re trying to target. As we do more surgical training, especially in live cases, our platform gets tested rigorously, and that will allow us to port it to the bigger concept of remote surgeries itself.

Alan: So the ultimate goal is remote surgeries altogether.

Aravind: Yeah, absolutely.

Alan: This is where the telcos go, “Ooh, 5G! We need 5G in there, to reduce the latency!” Because latency would be the biggest problem with remote surgeries that I can see. [chuckles] A couple of milliseconds difference, and you’re a millimeter to the left or a millimeter to the right.

Aravind: Absolutely. I mean, it’s our biggest bottleneck, I would say, to realizing the dream of remote surgeries. And probably for any other company that’s working on remote collaboration, this is the biggest bottleneck right now. Everything else could be worked around, but not this.

Alan: Well, the good thing is there’s a lot of people working on the space, so let me ask you, with what you’re working on now, where in the life cycle of the product are you guys now? Are you in trials? Are you in market? Where are you now?

Aravind: So currently we are in trials. We’ve worked with phantoms so far, because part of our surgical training also includes surgical navigation, I should say. I’ll share a video after this podcast, so you can see what I’m really talking about. With the existing VR simulators, you’re able to use the tools that are available (for example, in orthopedic surgery, a total knee replacement), you have all the tools and the assets, and then you go about performing, step by step, what a surgeon usually does. Now, in our case, because we are doing a live transmission, you’re able to see what the surgeon is doing. Most of the patient is draped, so you’re able to see the area the surgeon is working on. All good. But that also means we won’t be able to show the surgeon’s guidance tools on a video; if the surgeon is looking at a screen and doing navigation, that’s not going to be very effective for the resident.

So what we do is add one extra step before starting the surgery: the surgeon has to do a co-registration, basically a process whereby the preoperative scans of the patient are registered with the patient’s anatomy. The surgeon performing the surgery won’t be using it; they’re going to do the surgery the conventional way. But we’re going to use that data to superimpose virtual assets, as others call them. Let’s say it’s a neurosurgery: if I have a CT scan, I can extract the skull of the patient. I can extract the brain of the patient from the MRI. I can extract, let’s say, the grey matter, the white matter, and the DTI fibers, the tractography visualization, all of this data, which can be superimposed onto the patient. So even though the resident is seeing whatever the surgeon is seeing, he has this extra powerful feature: he can instantly overlay all of that onto the patient with high accuracy. And he has control, so he can adjust the opacity and see the anatomy in detail: what approach is the surgeon following? What nerves is the surgeon avoiding? That’s the kind of detail we are talking about. So we create virtual assets on the fly from the patient’s scans.

We’ve done that part, and we’ve tried it with surgeons. We have a deployment coming up with our adviser, Dr. Anthony Avelino; he’s the provost at University of Michigan Hospital. So we’re getting this deployed. COVID is delaying things for us, but I think it is still on track to deploy by October.
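
To make the scan-to-overlay step concrete: the snippet below is a minimal sketch of extracting a skull from a preoperative CT by thresholding bone-density Hounsfield units, using the open-source SimpleITK library. The file names and threshold values are illustrative assumptions; this shows the general technique Aravind describes, not 8chili’s actual method.

```python
# Minimal sketch (not 8chili's method): pull a skull mask out of a CT
# volume by thresholding Hounsfield units, ready to become an overlay.
import SimpleITK as sitk

# Load the preoperative CT volume (hypothetical file).
ct = sitk.ReadImage("preop_ct.nii.gz")

# Bone sits roughly above +300 HU; threshold to a binary skull mask.
skull = sitk.BinaryThreshold(ct, lowerThreshold=300, upperThreshold=3000,
                             insideValue=1, outsideValue=0)

# Close small gaps so the mask forms a clean surface.
skull = sitk.BinaryMorphologicalClosing(skull, [2, 2, 2])

# Save the mask; a renderer could then mesh it (e.g., marching cubes)
# and superimpose it on the registered patient with adjustable opacity.
sitk.WriteImage(skull, "skull_mask.nii.gz")
```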

Alan: It’s amazing that you said University of Michigan, because there’s a lot of work going on at the University of Michigan. Jeremy Nelson is kind of spearheading the university’s mixed reality center. They’ve got over 20 HoloLenses there, and they’ve been working in mixed reality for a few years now. I know everybody’s really excited about that. We actually took a tour of their virtual reality lab. And it’s interesting how a university took on this idea of mixed reality and then issued a challenge to all the different divisions of the school and said, “Hey, how do you think you could use this technology? Come up with some ideas and we’ll build it for you.” That is a really interesting use case, and the fact that you guys are working with them is wonderful. So I am assuming you’re going to be on Jeremy’s podcast sometime soon.

Aravind: [laughs] Well, I hope so, yeah.

Alan: You’re speaking to an audience here on this podcast. What is something that you would want to tell them to check out? Who are you looking for, as far as customers, and how do they get in touch with you?

Aravind: They can get in touch with me via email at hello@8chili.com. We have a dedicated team that looks at all emails and gets back. So we’re happy to do that.

Alan: Just so everybody knows, it’s 8chili.com.

Aravind: Yeah. Thank you.

Alan: Where did you come up with the name? I’ve got to ask.

Aravind: [laughs] Thanks, I was waiting for you to ask that question. [laughs] So we were looking for a name that would not have a conflict with any other company doing augmentation or something like that. And we failed for the first 24 hours. Then the team was hungry, so we ordered these super hot, spicy Indian pizzas. And as a joke I said, “Hey, you know what? We’re trying to spice up this navigation industry, right? So we should call ourselves the Chili Company.” And the team went, “That’s awesome! We should go for it!” [laughs] The 8 is basically infinity, and we have a very Marvel-inspired team, so our logo comes from Deadpool; we wanted it to be a “two eyes” kind of thing. So that’s the whole story with the name.

Alan: It’s one of those things you’ve got to ask. Now, is there anything else you think people should know about 8chili before we move on?

Aravind: Yes. For anyone who contacts us, we’d like to show them a demo of how the concept looks, and we’ll be ready for deployment by October. We’d be very happy to come to any location in the world we can fly to, if there are no COVID restrictions, or we can show the demo virtually as well. I’d suggest everyone at least get in touch with us to see the demos, because it’s really going to change your perspective on surgical training. Anyone who wants to collaborate.

Alan: So how can people see the demo if they don’t have a device? So can they see it if they have a Quest or…?

Aravind: Yeah, we can host it. We’re actually hosting an app on the Oculus Quest. So if they have a VR device, they can go and experience it.

Alan: Great.

Aravind: Yeah. So we’re trying to do closed demos, rather than hosting it publicly. But we’re happy to do that; if customers want to experience what it feels like being in a 3D environment, we’re very happy to set it up for them.

Alan: So I have to warn people, if you’re squeamish, am I going to be looking into people’s innards?

Aravind: Yes, for the demo we’ll mostly use a phantom, so people don’t get too freaked out about it. We have a phantom that we ordered from Amazon; it’s a skull. If they’re able to see a skull, they should be OK. But real surgeries, yes, there’s a lot of blood. You might get dizzy, nauseous, stuff like that, yeah. [chuckles]

Alan: I think this is also a great way for young people to try it early on. Maybe this is a good way to get young people excited about medicine; alternatively, people who thought they were excited about medicine might try this and go, “OK, that’s not for me. I don’t like that.” So I think it’s a great way to encourage young people to explore medicine and see the future of medicine in a way that is fairly scalable.

Aravind: That’s super advice, I would say. I’ll take that up as one of the things we’d like to do, because I had not thought of that: inspiring the next generation to become surgeons. But, yeah, I think it’s a very good space to look at. Maybe make it a bit simpler, so that they could still experience the 3D perspective and see that this is a career to explore. That’s a good space. Thanks, Alan.

Alan: This is one of the things that VR and AR do really well: giving people the ability to try a career before they commit their entire lives to it. One of my friends has a company called Career Labs, and what they do is create VR experiences, but they’re short, maybe 15 minutes. And you get to try driving an excavator, or a dump truck, or a mining vehicle. So you get to try these different things. Mostly his are in trades, but imagine having that, and then other people can try surgery. Imagine having an entire collection of things that students could try, viscerally. Get in there in VR and experience it. Because we do need people in trades. We do need people to go into surgery. We need people to go into these fields. And how do you inspire people? What better way than to put them into the experience? I think it’s a really wonderful thing that you guys have created. And there’s a whole other market, just inspiring the next generation.

Aravind: Absolutely. One of the use cases we’re also working on is remote collaboration for space and defense. Imagine, let’s say, you are sending someone to space. If there is a complication, it’s so difficult; you can only do first aid, but there can be times when a smaller intervention is definitely important and saves lives. So we could use our technology there. And that’s what we’re trying to work on: making it easier for someone on the other side to do the surgery.

Alan: I would love to see how you guys figure out how to overcome the two-second latency between something done on Earth and the time it happens on the Moon. There’s a two-second round-trip latency. Two seconds! Two thousand milliseconds! [laughs]

Aravind: I know. Right now, our latency… rather than recreating the entire scene with textures and all of those things, you know, the point cloud: what we do is take the complete point cloud and transfer it over edge protocols. And that allows us to keep the RGB-D data as well.
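
As a back-of-the-envelope illustration of why shipping raw point data is attractive, here is a sketch of a compact wire format for a colored point cloud: quantize positions to 16 bits over a known bounding box and keep 8-bit color. The point count, ranges, and layout are made-up numbers, not 8chili’s actual format.

```python
# Minimal sketch (not 8chili's wire format): pack a colored point cloud
# into a compact byte buffer by quantizing positions to 16 bits.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                                      # points/frame
points = rng.uniform(-1.0, 1.0, size=(n, 3)).astype(np.float32)  # meters
colors = rng.integers(0, 256, size=(n, 3), dtype=np.uint8)       # RGB

# Quantize each axis to 16 bits over the frame's bounding box:
# ~2 m of range / 65535 steps is ~0.03 mm per step, fine for display.
lo, hi = points.min(axis=0), points.max(axis=0)
q = np.round((points - lo) / (hi - lo) * 65535).astype(np.uint16)

# On the wire: 6 bytes of position + 3 bytes of color per point
# (plus the 24-byte bounding box so the receiver can dequantize).
payload = lo.tobytes() + hi.tobytes() + q.tobytes() + colors.tobytes()
raw = points.nbytes + colors.nbytes
print(f"raw: {raw/1e6:.1f} MB/frame, packed: {len(payload)/1e6:.1f} MB/frame")
```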

Alan: Well, that data is easy. It’s when you lay textures over that it gets really complicated. So you guys are doing the right thing.

Aravind: Absolutely.

Alan: A lot of people in 3D are missing that: if you have the 3D objects, first of all, you can depolygonate, you can take out a lot of polygons and make them smaller by simple compression.

Aravind: Yeah.

Alan: But if you don’t layer them with textures, they end up becoming easy to deal with. It’s when you lay on textures and transparencies, then it gets really complicated and that’s when your devices start to bog down a bit. So you guys are doing it right.

Aravind: Rigging and meshing take a lot of time and effort. Even if you move those processes to boot-up, they’ll still give us a delay. So we chose to work directly with just the point cloud. And our mission is basically about taking the point cloud and making it look realistic as soon as possible. [laughs]
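
For contrast, the mesh-side operation Alan calls “depolygonating” is what graphics libraries call decimation: cutting a mesh down to a fraction of its triangles before textures ever enter the picture. Below is a generic Open3D illustration (the file name is hypothetical), shown only to ground the terminology; as Aravind says, 8chili sidesteps meshing entirely and works on the point cloud.

```python
# Generic sketch of mesh decimation (not 8chili-specific): quadric
# decimation keeps overall shape while removing ~90% of the triangles.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("anatomy_model.ply")  # hypothetical file
print(f"before: {len(mesh.triangles)} triangles")

simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=len(mesh.triangles) // 10)
print(f"after: {len(simplified.triangles)} triangles")
```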

Alan: Yeah, it’s all about a bit of trickery, a little magic.

Aravind: It is, it is. [laughs]

Alan: Aravind, here’s my final question. What is the one problem in the world that you would like to see solved using XR technologies?

Aravind: I would like a general practitioner, or someone with general surgery experience, to be able to perform a surgery with collaboration from experts anywhere in the world, and deliver that surgical care irrespective of location: tier-two cities, tier-three cities. We want everyone to get that care; we want it to be democratic and universal. So that’s the problem we want to solve.

Alan: Well, that sounds like a pretty damn good problem to solve, my friend. And I wish you all the best with that.

Aravind: Thank you, Alan. And we look forward to keeping you posted as and when we move closer to our mission.

Alan: I’m looking forward to it. And for those of you listening, you can subscribe to this channel so you don’t miss any of it. It’s xrforbusiness.io. You can subscribe on all of the channels. And I want to say, Aravind, thank you so, so much. Aravind Upadhyaya from 8chili, you can find them at 8chili.com. And thank you so much for listening to the XR for Business podcast with your host, myself, Alan Smithson. And we’ll see you on the next episode. Thanks, everyone.

Aravind: Thanks, Alan.

Looking for more insights on XR and the future of business? Subscribe to our podcast on iTunes, Google Play, or Spotify. You can also follow us on Twitter @XRforBusiness and connect with Alan on LinkedIn.
