Muscles and machines – Connor Glass on new approaches to prosthesis control
Shownotes
What happens when the boundary between humans and machines starts to dissolve? Dr. Connor Glass is the Founder and CEO of Phantom Neuro, a neurotechnology company building the next generation of human–machine interfaces. With a background in medicine and research at Johns Hopkins, Connor left clinical practice to tackle a deeper challenge: how humans can intuitively and seamlessly control the machines that extend their capabilities.
In this episode of Taste of Bionics, Connor talks to Ranga about developing Phantom X, a minimally invasive muscle-machine interface that enables intuitive control of prosthetic limbs and robotic devices, and how to make high-performance neural control accessible and scalable – not only for amputees, but also for broader applications in robotics and human augmentation.
Tune in to discover how technology, resilience, and curiosity can redefine what it means to be human and why Ottobock is at the forefront of enabling that future.
Editorial note: this episode was recorded in 2025.
Find out more about Ottobock: https://corporate.ottobock.com/en/home
00:00:00: So I think ten years from now, it will be uncommon, I hope, to see an amputee that's not using a robotic prosthetic limb, upper or lower.
00:00:07: I hope that we will see less people in wheelchairs and more people utilizing low-profile exoskeletons.
00:00:37: between humans and potential technological extensions.
00:00:42: And today we explore the frontier where medicine, engineering and AI converge.
00:00:49: And our guest is a physician, an innovator reshaping how people interact with prosthetic and robotic systems.
00:00:59: He's founder and CEO of Phantom Neuro, a company developing minimally invasive neural interfaces that allow people to control prosthetics with remarkable precision.
00:01:14: Connor Glass is not only a medical doctor with research roots at Johns Hopkins University, but also a trailblazer whose company recently secured FDA Breakthrough Device designation and a major investment round by Ottobock.
00:01:34: Connor, welcome.
00:01:36: It's a pleasure to have you on our show.
00:01:39: First of all, you are in Austin.
00:01:41: I am sitting in Germany.
00:01:45: You are a doctor.
00:01:48: Well, you worked in medicine and I want to start always with sort of your personal story.
00:01:56: You started, well, as a student at Johns Hopkins, correct?
00:02:01: Well,
00:02:01: so I started actually as a medical student in Oklahoma.
00:02:04: where I was raised.
00:02:05: So I was going through medical school and thought that I wanted to be a reconstructive surgeon that merges humans and machines in order to give people their function back, their lives back after an injury like an amputation or nerve injury.
00:02:20: And so that's what led me to go do a research fellowship at Johns Hopkins in plastic surgery focused on human machine integration with the limbs.
00:02:27: And that's where I got exposed to technology and what people have been trying in the lab, what people have been trying commercially, and then decided for multiple reasons to kind of do a career pivot, start phantom, focus on technology development rather than residency and clinical practice, and have been doing that since twenty-twenty.
00:02:47: But
00:02:48: what was your motivation to sort of get into that direction?
00:02:52: Do you, I don't know, have a personal story, friends who lost their leg or something like that?
00:02:57: I don't know.
00:02:58: Yeah,
00:02:58: so it's not so much on the amputation side.
00:03:03: First of all, I like immediate results with things.
00:03:06: I like, you know, to have a significant...
00:03:08: Me as well.
00:03:09: Very
00:03:09: quickly.
00:03:10: And that was one of the things that originally drew me to surgery was that you could have somebody with some significant disability or some life-threatening ailment.
00:03:18: You could do surgery and then their life is radically redefined on the other side of that operation.
00:03:24: And to me, that's an incredibly rewarding thing.
00:03:27: And so that's kind of what set my sights on surgery.
00:03:31: Now, I started off in undergraduate studies being in ROTC.
00:03:37: So I actually wanted to do special forces and all of that with my life and was super passionate about that.
00:03:43: So I started off wanting to be in the military.
00:03:47: And then, you know, as part of military training, you have to do a lot of physical activities and I wanted to do special operations.
00:03:53: So I really tried hard to excel at all of that.
00:03:56: And I kept getting stress fractures in my legs.
00:04:01: which would cause me to not be able to do the physical activities and all of that.
00:04:05: And so that gave me kind of an appreciation for limb disability in a much more minor way compared to like limb amputation, but certainly your limbs not working the way that you want them to and preventing you from doing what you want to do.
00:04:18: And so I then recognized that I wouldn't be able to do special operations and all of that with these legs that keep having these recurring injuries.
00:04:27: and that I had to figure out what was next if that wasn't going to be an option for me.
00:04:32: And the only other thing that I was interested in was surgery.
00:04:34: And so I switched over to pre-med and then really honed in on.
00:04:39: what ultimately became reconstructive surgery and this passion for wanting to help people that have different types of disabilities.
00:04:45: And this sort of area of reconstructive surgery is naturally, first of all, something very technical.
00:04:52: So it's not only biology, medicine, but it's really, I mean, if you go into an operation room, you can see, you know, tools.
00:05:02: It's a bit like in my workshop, I have sometimes the feeling.
00:05:06: In fact... The actual research that I did at Johns Hopkins was focused on microsurgery.
00:05:10: So everything done under a microscope and really fine neuromuscular kind of reconstructive techniques.
00:05:18: That's really what I was focused on within a laboratory environment.
00:05:22: So yeah, it's deeply scientific, which is extremely exciting.
00:05:25: Well, it's also very complex.
00:05:27: I mean, I'm not a specialist, but I believe we have just over thirty muscles just in the hand and more muscles interconnected.
00:05:37: So this is highly, highly complex.
00:05:42: And as a surgeon, you naturally know that.
00:05:45: But now comes the step where you say, OK, one part is biology, but now technology enters, and we can sort of, you know, compensate things.
00:05:57: What was that moment?
00:05:59: What brought you into that?
00:06:01: On one hand, it's incredibly complex, and on the other hand, it's really not that complicated.
00:06:07: And so I'm a big believer in the cliche term of first principles thinking.
00:06:12: It's a way overused term, but at the end of the day, it drives pretty much... my entire thought process, which is how do you simplify the problem as much as possible and then approach it from that lens rather than what many people do in academia, which is what's like the most complicated, deeply scientific, rigorous thing that's possible.
00:06:32: That usually leads to like a really cool journal publication that nobody ever reads, whereas thinking from first principles leads to, ideally, a commercial product that a lot of people can use.
00:06:46: Really, it was honing in on looking at the different approaches to human machine integration that all rely on sensors of some kind interfacing with the human body, looking at what worked, what didn't work in the literature, what was most promising in the literature, both from like a technological perspective, but also an anatomical and psychological perspective as well.
00:07:08: And then trying to design a product based around that.
00:07:11: That's really the best of all worlds.
00:07:13: And that's really where we landed with this implant.
00:07:16: that's an under the skin subcutaneous implant for the limbs rather than something like a Neuralink brain computer implant in the brain.
00:07:25: Yeah, that's interesting because naturally there's a lot of almost science fiction talk going on about brain implants, about controlling all sorts of things with your brain.
00:07:38: At least in my view, this is still sort of far future, but you are very pragmatic and you say, okay, we don't have to get into the brain, but we can take some connectors, some nerves
00:07:52: and
00:07:53: use them.
00:07:53: Now, first of all, in prosthetics, you do have some people who use external signals.
00:08:04: So they don't need an implant.
00:08:07: What is the big advantage of an implant compared to this non-invasive approach?
00:08:13: So it depends on what your goal is with the patient outcome ultimately.
00:08:17: So let's say you have a robotic prosthesis today.
00:08:21: The most common situation is that somebody has two sensors in order to control all the different functionalities of the prosthesis.
00:08:28: A sensor can be equated to an on-off switch, essentially.
00:08:32: So imagine you have thirty different movements that you want a prosthesis to do, and you only have two on-off switches to control all of those.
00:08:40: There's a huge mismatch there, which means that you have an incredibly inefficient low throughput control system for what is otherwise an extremely complicated and amazing robotic system.
00:08:51: And so the outcomes with that kind of a setup, as you might imagine, are, leave a lot to be desired, we'll say, right?
00:08:59: you really don't necessarily need a robot to do the things that these patients are doing with robots.
00:09:06: You could use something far simpler like an open and closed body powered hook and have just about as good of an outcome.
00:09:12: And so therefore the payers say, why am I going to spend thirty, forty thousand dollars on this robotic thing that's going to do this when I could spend five or ten thousand dollars on this mechanical hook?
00:09:23: that's also just going to do this.
00:09:26: And usually, therefore, they elect to pay for the hook, the far simpler situation.
00:09:31: Now, the much less common situation, but more complex, is having a bunch of sensors built into the prosthesis socket that do the same thing as those surface sensors, the two sensors that we talked about previously.
00:09:45: And so you have more sensors in there in order to do more things.
00:09:50: The problem with surface sensors... meaning sensors that are worn on your skin rather than implanted, is that they're very unstable.
00:09:57: And all these different extraneous factors impact the signal quality, how stable that sensor is relative to the muscle that it's trying to record electrical activity from.
00:10:06: So it's like sweating and things like that, which, you know, it's like these heart-rate straps that measure the heartbeat, and sweating already causes a problem there.
00:10:19: So a stable signal was, I think, your aim: a subdermal electrode array, to have signals which are reliable to start with.
00:10:33: That's right.
00:10:34: At the end of the day, The way I describe it is that all of these human machine interfaces, how well they work is dependent on two factors.
00:10:41: How big is the signal that you're detecting and how stable over time is the relationship between the sensor that's detecting that signal and the thing that's generating that signal, spatially, meaning how stable does it remain in the exact same location.
00:10:54: If those two things are really good, meaning you have a big signal and a really stable interface, then you're going to have a pretty good outcome.
00:11:01: If either of those things is not so good, a small signal or a sensor that's constantly moving around, you're not going to have a good system.
00:11:09: And so with wearable sensors, you have a small signal because you're pretty far away from the actual thing, you know, that's generating the electricity, the muscle in this case.
00:11:18: And you have a very unstable sensor muscle relationship because it's on the surface of the skin and you're sweating and shifting around and all of that.
00:11:25: So you have a very unstable system where, in order to get it to work, like, for example, what Meta is trying to do with its wearable EMG armband to control their Meta glasses, you have to apply an unbelievable amount of data and a significant amount of compute in order to try to compensate for all of these not-so-good factors associated with that.
00:11:49: And it's still TBD if that's even possible to compensate for those things in a product that's actually out on the market.
00:11:54: So that's why people seek to go implantable is to get bigger signals and more stable relationships with the sensors.
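Editorial note: the two quality factors Connor names, signal size and sensor stability, can be illustrated with a toy calculation. The amplitudes below are invented for illustration; they are not Phantom Neuro's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 10_000)
noise = rng.normal(0, 0.1, 10_000)          # shared measurement noise

# A surface electrode sits far from the muscle: small signal.
surface = 0.2 * np.sin(t) + noise
# An implanted electrode sits right on the muscle: big signal.
implant = 2.0 * np.sin(t) + noise

def snr_db(x):
    """Signal-to-noise ratio of the recording, in decibels."""
    return 10 * np.log10(np.var(x - noise) / np.var(noise))

print(f"surface: {snr_db(surface):.1f} dB, implant: {snr_db(implant):.1f} dB")
```

With a ten-times-bigger signal, the implanted recording comes out 20 dB ahead before any of the stability problems (sweat, shifting straps) are even considered.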
00:12:00: If you speak about bigger signals, I mean, you speak about an array.
00:12:05: So is that just one signal or that a multitude of signals?
00:12:10: How can I imagine?
00:12:13: how does that work?
00:12:14: Yeah.
00:12:14: So for our system, a single implant has sixteen sensors on it.
00:12:20: And it's a long flexible array that has sixteen sensors along the surface of this kind of thin silicone array.
00:12:29: Now, our system is somewhat unique in that it's modular, meaning you can have one implant or you could have two implants or three or whatever you want.
00:12:38: And so in an ideal scenario, you have two implants for our system, one that goes on the top of your residual limb, or intact limb, injured limb, and one that goes on the bottom.
00:12:47: And so you have a ring of thirty two sensors.
00:12:50: So multiple implants are synchronized to where as you use it, it might as well be one implant.
00:12:57: as far as the system is concerned.
00:12:58: So you have thirty two sensors that are recording data continuously directly from the muscles in our system.
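Editorial note: the "two implants behave as one" idea can be sketched in a few lines. The channel counts match the interview; the shapes and helper name are hypothetical.

```python
import numpy as np

def merge_implants(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Stack two time-synchronized 16-channel EMG windows into one
    32-channel 'ring', so downstream decoding sees a single implant."""
    assert top.shape == bottom.shape        # same window length and rate
    return np.vstack([top, bottom])

# Two hypothetical 16-channel windows of 200 samples each:
top = np.zeros((16, 200))                   # implant on top of the limb
bottom = np.ones((16, 200))                 # implant on the bottom
ring = merge_implants(top, bottom)
print(ring.shape)                           # (32, 200)
```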
00:13:05: So first of all you have a lot of data on one side and on the other side.
00:13:09: just imagine I would be a patient.
00:13:12: I want to... grab a cup of coffee.
00:13:17: Now, this is what I want to do, but how do you get from thirty two different signals, which are highly complex?
00:13:26: It's like a big orchestra into some action, which at the end means, okay, the whole system does what I really want to do.
00:13:36: Yeah.
00:13:37: So we rely on, as do most people, what's called pattern recognition.
00:13:42: So you get all this really chaotic electrical activity.
00:13:45: You have thirty-two, imagine thirty-two lines down a computer screen with a bunch of chaos in each one of those lines.
00:13:52: Then you do a bunch of stuff to that data, meaning you clean it up, you filter out noise, you flip it over, do wild things to it.
00:13:59: And you get it to a way, or I guess form, where you can find patterns in those signals.
00:14:06: And so you tell somebody, okay, we're going to train your prosthesis to open and close the hand.
00:14:13: when you want to open and close your hand.
00:14:15: So what you're going to do is, so stepping back for a second, an important factor in all this is that amputees still have the neural pathways intact down to the muscles.
00:14:25: So from their experience, they are still able to do all of this and their muscles still move.
00:14:31: There are certain situations where this is not the case, but in general, we can generalize.
00:14:37: They're able to do all of this, except that they don't have a limb to actually execute those movements, but their muscles are moving trying to do that action.
00:14:45: And so when I open and close my hand, it looks very, very similar to an amputee who's lost their limb, let's say here, trying to open and close their hand.
00:14:53: And so we tell a patient, for example, to open and close their hand.
00:15:00: and generate electricity associated with that action.
00:15:03: And then we find patterns in it and say, okay, here are the patterns associated with open and closing your hand.
00:15:08: Whenever we see this pattern, that means that's what they're trying to do.
00:15:11: So we're going to tell the prosthesis to do that.
00:15:13: And then you scale it up to rotating the wrist and pointing the finger and all of that.
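Editorial note: the decoding loop Connor describes, record labeled muscle activity, extract features, match new windows to the learned patterns, can be sketched with a toy nearest-centroid classifier. This is an editorial illustration, not Phantom Neuro's actual algorithm.

```python
import numpy as np

def features(window):
    """Root-mean-square per channel: a classic, cheap EMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=1))

class GestureDecoder:
    """One feature centroid per trained gesture; decode = nearest centroid."""
    def __init__(self):
        self.centroids = {}

    def train(self, gesture, windows):
        self.centroids[gesture] = np.mean([features(w) for w in windows], axis=0)

    def decode(self, window):
        f = features(window)
        return min(self.centroids,
                   key=lambda g: np.linalg.norm(f - self.centroids[g]))

rng = np.random.default_rng(0)
# Fake 32-channel, 200-sample windows: "open" = strong activity, "close" =
# weak activity (real gestures differ per channel, not just in overall scale).
open_w = [rng.normal(0, 1.0, (32, 200)) for _ in range(10)]
close_w = [rng.normal(0, 0.2, (32, 200)) for _ in range(10)]

dec = GestureDecoder()
dec.train("open_hand", open_w)
dec.train("close_hand", close_w)
result = dec.decode(rng.normal(0, 1.0, (32, 200)))
print(result)  # open_hand
```

Scaling to wrist rotation, finger pointing, and so on is then just more labeled gestures and more centroids.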
00:15:16: So you have a sort of a training phase or a learning or teaching phase.
00:15:22: So walk me through.
00:15:24: I mean, I'm... coming and visiting you physically.
00:15:29: And, well, so the first thing is I got the implant.
00:15:33: Now, the implant, first of all, is, I believe, not a serious big surgery operation, but it's, by what I read, quite fast, isn't it?
00:15:45: That's right.
00:15:46: So, a caveat with this: we have not implanted anybody yet;
00:15:51: we're implanting patients for the first time
00:15:54: late this year, or the very beginning of next year, around there.
00:15:57: But so you can equate it to other procedures in there.
00:16:00: So when you imagine cosmetic plastic surgery, right?
00:16:04: Something that people line up for all day, every day.
00:16:06: There are many procedures associated with that.
00:16:09: Facelift, tummy tuck, breast augmentation.
00:16:13: Now you are speaking of a world you don't know.
00:16:16: Yeah,
00:16:17: that's true.
00:16:19: And so, you know, those are... procedures that are elective that people sign up for.
00:16:25: And in general, those require general anesthesia even in order to have those different types of procedures.
00:16:31: What we're talking about for our device is implanting a thin strip under the skin in a pocket right there on top of the muscle.
00:16:40: So that's an outpatient procedure, meaning you don't have to be in a formal operating hospital environment.
00:16:44: You don't need general anesthesia.
00:16:46: You can even just locally numb somebody up.
00:16:49: in order to do that.
00:16:50: For example, dermatologists do this all the time when they're doing, you know, different procedures for the skin.
00:16:57: And it's something like a thirty minute to probably a maximum of hour long procedure where you don't actually have to go to sleep in order to have that.
00:17:05: So it's incredibly simple and it's something that any surgeon is trained to do, and even non-surgeons, speaking of dermatologists, ER physicians, people like that, have the capability of doing it.
00:17:15: So it's very, very straightforward and much less invasive than cosmetic plastic surgery.
00:17:19: Okay, so I entered.
00:17:21: This was a short, well, small operation.
00:17:26: And at the end, well, first of all, what you implant is a very thin sort of array, which is also stable.
00:17:34: This will stay for a long time in my body.
00:17:36: So you're sure that, you know, not in three years, you say, OK, we have to exchange it.
00:17:43: And
00:17:43: we don't need to go into that.
00:17:44: But I think you might want to do that.
00:17:45: But yes, theoretically, it lasts, you know, eight, ten years, something like that.
00:17:49: OK, so we could update naturally.
00:17:52: But now, at the end, you sort of, how can I imagine you have a cable coming out with sixteen or thirty-two different signals, or how must I look at it?
00:18:06: So it's all wireless.
00:18:07: So you have a different strip with embedded electronics, a little disk of electronics embedded into it.
00:18:13: So it communicates with Bluetooth.
00:18:15: basically, from inside the body to outside the body.
00:18:18: So there are no wires coming out of the skin, no nothing.
00:18:21: It's a fully implanted small device that communicates wirelessly.
00:18:25: And so now what you functionally have is this wireless neural bridge from your point of injury into a machine.
00:18:33: You said it's highly complex, sixteen or, if you have two implants, thirty two different signals.
00:18:43: So I need a very big rucksack of computing power in order to process this,
00:18:50: no?
00:18:50: No, so modern-day electronics are unbelievable.
00:18:55: I mean, it's crazy that humans have actually been able to build some of these microchips.
00:19:01: Now, we have taken an approach with our artificial intelligence to make it as lean as possible, meaning we don't have... a ton of layers of neural networks and things that would require a lot of compute and therefore a lot of power.
00:19:16: And for two reasons we've done that.
00:19:18: I guess three, one is you don't actually need to apply all that really complex compute to it, at least not from our perspective.
00:19:25: Two is you don't want it to have a significant power demand because, excuse me, you don't want to have to recharge this thing every two hours or something like that.
00:19:34: And three is latency.
00:19:36: You want to be extraordinarily fast so that from the user's perspective, it's instantaneous.
00:19:42: And so what that means is that you have to have a pretty lean system.
00:19:46: And so we actually have, I don't know the exact power requirement off the top of my head, but... something that doesn't take that many hours away from just the inherent battery life of the prosthesis.
00:19:55: The prosthesis is more power hungry than our system.
00:19:59: And it doesn't require a crazy amount of compute.
00:20:00: So we have like a little edge compute device that we call the fusion port that gets embedded into the prosthesis socket.
00:20:07: It's about the size of a matchbox you can imagine.
00:20:10: And that's what actually does the compute.
00:20:11: So you have more or less these raw signals from the implant.
00:20:15: that via Bluetooth, talk to this fusion port, and then that runs the compute really quickly, and then that outputs the command to the prosthesis
00:20:23: all
00:20:23: faster than the blink of an eye, really, really quickly.
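Editorial note: the latency argument is easy to demonstrate: with a lean feature-plus-rule pipeline, decoding one EMG window costs far less than the roughly 100 milliseconds a user would perceive as lag. The threshold, window size, and sample rate below are invented for the sketch.

```python
import time
import numpy as np

def decode_command(window: np.ndarray) -> str:
    """Stand-in for a lean on-socket decoder: one feature, one rule.
    (The real system maps patterns to many gestures; this is a sketch.)"""
    rms = float(np.sqrt(np.mean(window ** 2)))
    return "open_hand" if rms > 0.5 else "idle"

# One 32-channel window, ~50 ms of data at a hypothetical 2 kHz rate:
window = np.random.default_rng(1).normal(0, 1.0, (32, 100))

t0 = time.perf_counter()
cmd = decode_command(window)
latency_ms = (time.perf_counter() - t0) * 1e3
print(cmd, f"decoded in {latency_ms:.3f} ms")
```

Keeping the model this lean is exactly the trade Connor describes: less compute means less power draw and lower latency, at the cost of fancier network architectures.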
00:20:27: It's so fascinating.
00:20:27: You know why?
00:20:28: Because in the nineteen nineties, I met John Hopfield, who got the Nobel Prize last year.
00:20:37: and Terrence Sejnowski, and they were, you know, this was the sort of big first wave of neural networks.
00:20:46: and they were talking about that and they had the example, you know, you could simulate a hand and see all the different muscles, so it reminded me of a concert.
00:21:01: And what I found quite interesting is that I learned then that this is over-defined.
00:21:09: What do I mean by that?
00:21:11: If I have a cup, I can grab it like this.
00:21:15: I can grab it like this.
00:21:16: I can, you know, there are thirty nine ways or even more how to grab a cup.
00:21:22: So this is something extraordinary.
00:21:27: which on one side gives us this huge flexibility in normal life, you know, taking things and moving things.
00:21:35: But this required a lot of learning skills or computing power.
00:21:42: In order to well have this action of grab a cup and I could grab it like this or like this or like this.
00:21:50: How do you go about this aspect of well different ways for the same action?
00:21:57: Yeah, so what we care about is what the hand is doing, not necessarily what it is functionally that they're trying to do with the hand.
00:22:09: So we don't care that they're picking up a cup of coffee.
00:22:12: We care that they're trying to do this, or this, or this, or whatever that is.
00:22:19: That's where we focus on.
00:22:22: And there are two approaches, more or less, to how you can go about decoding these things.
00:22:31: The simpler way to do it, and the way that we're currently, our first version of our system is doing it, is direct gesture-based decoding.
00:22:39: where you're either doing this or you're doing this, you know, they're discrete gestures that you have to provide the artificial intelligence network with data relating to that specific gesture so that it can find the patterns in that and decode it, right?
00:22:58: So it's very much like a one-to-one.
00:23:01: The way that humans actually move their limbs is more what's called simultaneous proportional control, which is free-moving, fluidic, simultaneous wrist plus this, whereas in a gesture-based control system, if you wanted to rotate your wrist and open and close your hand, you would first rotate the wrist and then open and close the hand.
00:23:22: It would happen quickly, but it would be step-wise.
00:23:24: Simultaneous, you could do both at the exact same time.
00:23:27: And that's more complex.
00:23:28: But it's perhaps for the future, who knows.
00:23:32: So, I now have your implant and... This, I believe, is very individual.
00:23:39: So you have patterns which may be quite different from one person to the other.
00:23:44: So I need to have some training.
00:23:47: How is that?
00:23:48: You show me pictures or you give me a command and say, OK, open and close your hand.
00:23:53: And then you record the signals and sort of know, OK, this is the pattern for Ranga opening and closing his hand or moving his wrist.
00:24:04: Is that the way it works?
00:24:07: So there's a companion app that goes along with it, meaning a mobile application on your phone.
00:24:12: And it plays you videos that you follow along with.
00:24:15: And so you can think of it as an avatar where you follow along with this person doing these different movements.
00:24:19: And there's, you know, verbal cues associated with it as well.
00:24:23: So again, they'll say the app will say something like, OK, we're going to train, you know, closing your hand.
00:24:28: Let's really simplify it.
00:24:30: On the count of three, we're going to start.
00:24:32: and you're going to close your hand and you're going to follow along with this avatar and just do what it does with your phantom limb.
00:24:38: And so then the system in the background is recording all the data time stamped and knowing exactly when everything is occurring.
00:24:44: And then it runs compute and it sets parameters with an algorithm associated with that specific movement.
00:24:50: Then it'll say, OK, next we're going to do open hand.
00:24:53: So on the count of three, follow along, so on and so forth.
00:24:56: And so.
00:24:58: It can be an arduous process in order to go through that.
00:25:02: And this is another benefit of an implantable system.
00:25:04: With a wearable system, when the signals are constantly changing, you have to recalibrate the system, which means you've got to go through that whole process again.
00:25:11: of like, follow along with me and we'll do that.
00:25:14: With an implantable system, if everything's remaining the same forever, then you don't have to constantly recalibrate the system, which can be a really annoying thing and a barrier to utilization.
00:25:24: So anyways, that's how we go about it.
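Editorial note: the follow-along calibration flow, cue a gesture, record a time-stamped window, label it, can be sketched like this; the function and gesture names are hypothetical.

```python
import time

def record_calibration(cues, record_window):
    """For each cued gesture, grab a time-stamped EMG window and label it,
    producing the training set the decoder is fit on."""
    dataset = []
    for gesture in cues:
        # In the real app, an avatar video and a countdown play here.
        stamp = time.time()
        window = record_window(gesture)     # stand-in for implant streaming
        dataset.append({"gesture": gesture, "t": stamp, "emg": window})
    return dataset

fake_stream = lambda gesture: [0.0] * 32    # placeholder 32-channel sample
data = record_calibration(["close_hand", "open_hand"], fake_stream)
print([d["gesture"] for d in data])         # ['close_hand', 'open_hand']
```

The stability point then falls out naturally: with wearables this labeled-recording loop must be repeated whenever the signals drift, while a stable implant can keep reusing the same calibration.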
00:25:27: So this is the the training part and as I understood this is you do it once and You don't have to update or perhaps you have to update if you realize.
00:25:38: okay.
00:25:39: I can still open this in a different way.
00:25:42: How many different actions would you train?
00:25:45: I mean this is opening closing the hand.
00:25:48: This is
00:25:49: I don't know... Great question. So it's up to the user.
00:25:51: So there will be something like thirty ish, available gestures, movements of the hand that you can feel free to calibrate.
00:25:59: Now, the literature says that some like eight or nine different gestures actually account for, you know, eighty, ninety percent of somebody's daily activities.
00:26:08: You can think the majority of what we do is open and close our hand.
00:26:10: And that's why these hook prostheses work fairly well is because it does account for most of that.
00:26:16: And so a user will be able to select what movements it is that they want to do.
00:26:19: So they could have different programs.
00:26:22: One could be for work.
00:26:23: where there's maybe four different movements that they want to work perfectly reliably, so therefore they want to limit it to just those four.
00:26:30: And then one could be for free roaming around the house in daily life that has more gestures incorporated into it.
00:26:37: So it's totally up to the user.
00:26:38: Whereas current systems are limited typically to just four different movements that you have to mode switch between those different ones, we can directly control really as many as the person wants.
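Editorial note: the user-selectable "programs" Connor describes amount to a per-context whitelist of enabled gestures. A minimal sketch, with invented gesture names:

```python
# Hypothetical per-user programs: a small, highly reliable set for work,
# a larger set for everyday use around the house.
PROGRAMS = {
    "work": {"open_hand", "close_hand", "pinch", "wrist_rotate"},
    "home": {"open_hand", "close_hand", "pinch", "wrist_rotate",
             "point", "key_grip", "tripod", "wrist_flex"},
}

def allowed(program: str, decoded_gesture: str) -> bool:
    """Forward a decoded gesture to the prosthesis only if the active
    program has it enabled."""
    return decoded_gesture in PROGRAMS[program]

print(allowed("work", "point"))   # False: limited to four reliable moves
print(allowed("home", "point"))   # True
```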
00:26:52: This, in future, is going to give us many more degrees of freedom to work with.
00:27:00: Now, naturally, this is an implant.
00:27:03: So you have the FDA coming in and so on.
00:27:06: And there was good news because you have a special status where for transformative tech, you are allowed to speed up.
00:27:16: Can you tell us a bit on this procedure?
00:27:20: Naturally, many people will be listening and saying, okay, can I order this on Amazon?
00:27:26: But there is still research necessary, huh?
00:27:29: Right, absolutely.
00:27:31: So we got a designation with the FDA, it's called Breakthrough Designation, intended for technologies that can have a transformative impact on society, and therefore there's a motivation to get it.
00:27:42: out to market as quickly as possible and to ensure that it's successful from the FDA's perspective.
00:27:48: And so that gives us special access essentially to the FDA where we have much shorter turnaround times for submissions to the FDA.
00:27:55: We talk to the FDA at least once a month via this program and just have a lot of things available to us that otherwise wouldn't be possible.
00:28:05: And so you have to have, you know, clinical data, preclinical data, and certain things to really tell the FDA, hey, you should believe that this is going to work based off the data that we have collected and the background literature that's supporting it.
00:28:18: In order to get this, you can't just have an idea and then show up and say, this sounds really nice, go ahead and give us breakthrough.
00:28:25: And so then that expedites us getting to market, which will be another, you know, two-ish years before we're actually on the market being sold as a marketed device
00:28:34: with FDA.
00:28:35: What are the biggest challenges which you will face in the next weeks, months?
00:28:45: Can you talk about that?
00:28:46: Yeah,
00:28:47: absolutely.
00:28:48: So I mean, we're really gearing up for our first clinical trial.
00:28:50: The technology is more or less complete, at least in terms of what's necessary for the first clinical trial and what we want to have done.
00:28:58: And so now it's about some of that preclinical testing.
00:29:02: things like biocompatibility, mechanical robustness, all the checkmarks that you have to do in order to get approval to do a clinical trial.
00:29:12: It's very complex to set up a clinical trial.
00:29:15: We're actually doing our first clinical trial in Australia for a whole host of reasons.
00:29:18: And so trying to do a clinical trial in another country adds an additional layer of complexity to that.
00:29:26: And then ensuring that we have outcome metrics.
00:29:30: that are in line with what the FDA wants to see, as well as with what payers, the people that are actually, at the end of the day, going to pay for this, want to see, as best as you can determine.
00:29:40: And so there are all these different stakeholders where you have to take into consideration what they want for your device and incorporate that into your clinical trial.
00:29:47: And so the biggest challenge is putting all that together, getting approval for the study, and then actually executing the study.
00:29:56: you know, the biggest milestone for companies like ours and Neuralink and others is your first patient.
00:30:01: Because that's really, before that, it's theory.
00:30:03: After that, it's, okay, proven reality.
00:30:05: I
00:30:06: can imagine that you receive many, many mails from people around the world who say, hey, let me be your first patient.
00:30:17: We have to recall that for many people, I mean, I spoke to people who were disabled and suddenly using a bit of technology, they were able, for example, to eat by themselves.
00:30:30: And this was psychologically something fantastic because at last they feel a new liberty, a new freedom to do things.
00:30:40: So I could imagine that you have a long waiting list.
00:30:44: Yes, we get a lot of inbound people that are interested in our technology.
00:30:49: Before you have a clinical trial actually approved, you have to be careful about, like, recruitment and all of that.
00:30:55: You're not allowed to recruit before then.
00:30:55: But absolutely, we get a lot of inbound of people telling their story and expressing interest in, you know, our product and that.
00:31:04: And it's just incredible, these patient stories and things that we take for granted being able-bodied, right?
00:31:11: You know, something like feeding yourself.
00:31:14: trying to give people back some semblance of independence and then really our goal, which is trying to get them all the way back to, you know, the way that they were before they had an injury or disability, it's truly inspiring.
00:31:26: Do you recall one specific story where you say, wow, this flashed me?
00:31:32: There is one specific story and it's somebody that I know who lost his leg.
00:31:42: His leg got caught in an auger, in a grain mill.
00:31:47: He was working in a grain mill.
00:31:51: He went to go try to save somebody who got, you know, started to get caught inside this auger.
00:31:55: His leg got caught, and he ended up losing his leg all the way up to his pelvis.
00:32:02: you know, his resilience and the outcome that he's had in his life is truly remarkable.
00:32:07: But that's one that really stands out to me: somebody who was trying to save somebody else ended up having a very significant injury as a result of that life-saving act, and then still goes on to have the most incredible perspective on life and the determination to accomplish all the things that he has actually accomplished.
00:32:25: You know, these people are so... resilient in ways that I don't know that I would be myself.
00:32:31: And so that's really what motivates us to do the hard work that we're doing here.
00:32:37: Doing a startup is not easy by any means.
00:32:39: It requires real dedication and belief in what it is that you're trying to do.
00:32:44: Otherwise, nobody would be able to survive this.
00:32:51: And so really, working with patients is what gets us through that.
00:32:51: Yes, I totally agree that this sort of resilience and optimism is something very touching.
00:32:59: I mean, we just had a podcast with John McFall and it was incredible to see, well, how much energy is there and how much openness is there.
00:33:12: Now, if you go on, so you are in, well, this critical phase, but in the next, well, perhaps two years, we might see your first products, correct?
00:33:26: Right.
00:33:26: So you'll see next year, at the beginning of next year, you'll see the first actual patients with the implants using the product doing that.
00:33:34: But in the next two years, about thereabouts, you'll actually see the product on the market and the ability to go receive our product, unassociated with any clinical trial, which will be amazing.
00:33:45: Now, there is also a lot of biofeedback, in a certain way, so that slowly, I could imagine, if you use this sort of, well, neural link in a certain way, at the end you can do things unconsciously.
00:34:04: So you grab, you know, you don't have to think of grabbing, and then the actuator gets active and so on.
00:34:12: But this could become sort of a new normality for these people.
00:34:17: Absolutely.
00:34:18: I typically give two examples or things to think about when thinking about this becoming a natural experience.
00:34:27: In the most ideal world, there's sensory feedback, where you have actual sensations through this robotic appendage into your nervous system, where it not only moves based on thought, but also you feel it and have feedback based on that.
00:34:42: We're still quite a distance from that in any meaningful capacity on a commercial scale.
00:34:47: But visual feedback and enough hours using a device can be enough to where the system essentially becomes a natural part of your body.
00:34:57: And the two examples that I give, one are gaming controllers.
00:35:00: You start off with a gaming controller and it's very unnatural and you want it on very low sensitivity so that you can get used to it.
00:35:07: But then you can excel all the way up to being a professional gamer to where it's super high sensitivity and it's just... automatic reactions based almost entirely on visual feedback.
00:35:16: The second one is driving a car.
00:35:17: I think that's more relatable: before you had backup cameras and all of these sensors all over the place that we're all now used to, you were still able to get within, you know, an inch or centimeters of a wall backing up, because you have almost this feeling.
00:35:31: That's exactly how it can be: after you utilize it enough, it could become just a natural extension of your body where things are second nature.
00:35:39: You mentioned a very important aspect, which is sensors.
00:35:42: So right now, this is just one way.
00:35:45: But in the future, I mean, if we meet again in five or ten years, would we also have some sort of sensors so that, for example, if I take a cup, you know, the pressure I need in order to lift it has to be different depending on the weight.
00:36:07: Is that something far away or possible?
00:36:13: We are working on technology to make that happen.
00:36:17: It is far away, I would say, meaning, I don't know, ten years, five to ten years away from being commercially feasible.
00:36:27: but not that far away from being, you know, experimentally feasible.
00:36:31: And so there are already a lot of examples in a laboratory environment, let's say, of sensory feedback via direct nerve stimulation and direct brain stimulation, where people feel pressure, what's called proprioception, where you are in three-dimensional space, pain, sometimes temperature.
00:36:51: So on the question of is it feasible versus is it not feasible: we know it's feasible.
00:36:57: Now the question is, how do you actually make it stable and product-worthy?
00:37:01: Which is very different from the feasibility question, and in fact, I would say even harder than that.
00:37:07: But hopefully, if we talk again in five years or something like that, we've made meaningful progress on that and are well on the way to a commercially relevant product that incorporates sensory feedback.
00:37:20: Now, if you speak of the first patient, do you also face ethical questions?
00:37:25: I mean, for example, I don't know if there is a standard with the implant, but it could be that, well, if you have this implant, you have to be with company X or company Y.
00:37:45: Or will there be a standard, you know, like with electrical connectors, so that one day we all agree on a method and a technology which could be used in a wide variety of devices?
00:37:56: Yeah, so there are a ton of ethical standards that have to be met when implanting anybody with technology, in terms of safety, the type of support that you're going to provide, especially within a clinical trial environment, all these different things.
00:38:12: And so we're meeting, of course, all of those standards.
00:38:15: Now, what I think you're getting at is, if somebody has our implant, does that mean that they have to work with, let's take for example, Ottobock, right?
00:38:25: Does that mean that they have to have this specific Ottobock hand in order to gain any benefit from our system?
00:38:31: And the answer is no.
00:38:32: So we're building our system to be broadly compatible.
00:38:37: with a whole bunch of different prosthetic limbs because I believe that choice is extremely important and is a very important driver of reducing cost of these systems.
00:38:47: So I think that giving people choice puts the financial incentives in place for further competition and cost reduction.
00:38:56: And all of these robotic systems are going to be dependent upon control systems like ours and Neuralink and the whole host of other ones, right?
00:39:05: Almost nobody is working on the end device, meaning the computer or the prosthesis or whatever, plus the actual human-machine interface, because they're very different problems.
00:39:19: Now, there's no standard for communication protocol to do that, of course.
00:39:24: And so hopefully over time, and there are people working on this, there will be standards for what's called the APIs, the actual, like, language that you communicate between two different products.
00:39:35: There needs to be standardization in this, so that, you know, you can simplify both of these systems and know automatically that there's going to be compatibility, such as with Bluetooth, right?
00:39:46: You know, you can have any Bluetooth device and sync it up, like this microphone I'm using here that's not an Apple product:
00:39:51: I can sync it up to my Apple MacBook because there is a mutual language that they can communicate across.
00:39:59: That needs to happen within, especially within the disability space because there's a lot of good people out here trying really hard to create good products for very good reasons and not having a standardized communication language makes things infinitely more complex.
00:40:13: And so we have to build out all this infrastructure and architecture to speak in multiple different languages to multiple different types of products, which means that you have less efficiency, bigger products, physically, all these different things.
00:40:25: So hopefully over time, as these things become more standard, there's more standardization.
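The "mutual language" Connor describes can be sketched in code. This is a purely hypothetical illustration (none of these names are a real Phantom Neuro or Ottobock API): a neural interface emits an abstract intent message, and any vendor's prosthesis driver translates it into its own motor commands, so devices become interchangeable.

```python
# Hypothetical sketch of a standardized control API: one shared Intent
# format, many vendor-specific translators. All names are invented.
from dataclasses import dataclass

@dataclass
class Intent:
    """Device-agnostic control message: gesture name plus normalized effort."""
    gesture: str   # e.g. "close_hand", "rotate_wrist"
    effort: float  # activation level in [0, 1]

class ProsthesisDriver:
    """Each vendor implements one translator from the shared Intent format."""
    def __init__(self, name, gesture_map):
        self.name = name
        self.gesture_map = gesture_map  # shared gesture -> vendor motor command

    def execute(self, intent: Intent) -> str:
        command = self.gesture_map.get(intent.gesture)
        if command is None:
            return f"{self.name}: unsupported gesture '{intent.gesture}'"
        return f"{self.name}: {command} at {intent.effort:.0%}"

# Two different vendors consume the same intent without custom integration.
vendor_a = ProsthesisDriver("HandA", {"close_hand": "flex motors 1-5"})
vendor_b = ProsthesisDriver("HandB", {"close_hand": "drive tendon servo"})
intent = Intent("close_hand", 0.6)
print(vendor_a.execute(intent))  # HandA: flex motors 1-5 at 60%
print(vendor_b.execute(intent))  # HandB: drive tendon servo at 60%
```

With such a shared format, the interface maker ships one output path instead of one integration per prosthesis, which is exactly the simplification the Bluetooth analogy points at.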
00:40:29: But if you look into the future: right now there are a few actions, but this could become like a new language with many different fine actions, et cetera.
00:40:41: So, I mean, if I just look at the advances in electronics and computer science, I could imagine that in a few years, the complexity is really going to go up, up, up.
00:40:56: Absolutely.
00:40:59: The more people, the more commonplace systems like this become, the more complicated things will become.
00:41:04: And then you get into like transhumanism and, you know, what, where does this all lead when taken to its extreme, which I think is incredibly interesting.
00:41:12: And so I think, yeah, there will be some new language that sets the requirements for not only data that we send to a robot, but also data that the robot sends to us.
00:41:22: It is actually already today a bi-directional system, where you need to know what the motors are doing, via data coming out of the robot, in order to control the motors to tell them to do the next thing, right, for finger motors or whatever that is.
00:41:35: So there's a bi-directional communication path, and so we'll have to land on something that allows you to really take advantage of these next-generation systems.
00:41:43: Look at humanoid robots, for example, right?
00:41:45: Really complicated, massive degree of freedom systems that these neural interfaces are going to be able to control.
00:41:53: in a way, I believe, that's not possible with any external or handheld controller.
00:41:57: I have a degree in particle physics, and I worked in a nuclear research center, and I remember, you know, in those days we called them hot cells; this was highly radioactive material.
00:42:13: And you could not touch it.
00:42:15: So you had a lot of, you know, arms to sort of manipulate the stuff.
00:42:24: So I could imagine this is not only in prosthetics, but you could also have applications where sort of my data is being sent to another place.
00:42:34: Even, you know, you could imagine you have the arm or hand at your place in Austin, Texas, and from Germany, just by thinking I could sort of have some action being done on the other side.
00:42:51: So we've already done that.
00:42:51: So we've worked with a company and we've controlled their robot in Mexico from our office in Austin.
00:43:00: So that's certainly been done before.
00:43:03: You can imagine a surgical robot, a really complicated high degree of freedom system that today requires a massive control console for the surgeon to go sit within, that's the size of a refrigerator.
00:43:18: That can all be replaced eventually by VR goggles and implants like ours.
00:43:24: And it is feasible, I believe, as the technology gets smaller and better over time, that someone like a surgeon, who does this stuff all day every day, would get an implant to do the controlling.
00:43:32: And then you can do remote operations, where you could have somebody in a remote location, wherever, that needs a quick operation, and a surgeon here in Texas that's able to remote into this portable surgical robot in order to make things happen.
00:43:46: There are people already working on things just like that.
00:43:48: So in other words, this is not only people who are amputees, but there is also a path.
00:43:57: towards people using an implant, although they have both hands, but they use the implant for the data which then would be transmitted somewhere else and would control a robotic arm or something like that.
00:44:13: A hundred percent, or just the internet of things around them.
00:44:19: Elective implants are nothing new, right?
00:44:22: So we talked about a lot of that within the cosmetic realm.
00:44:26: We are already well into the evolution from wearable types of technologies to implantable technologies.
00:44:34: Glucose monitors are a great example of that.
00:44:36: So almost everybody knows a handful of people that do not have diabetes that wear continuous glucose monitors in their skin.
00:44:44: And so that's a very minimally invasive type of technology, and you have to put this thing on and all of that.
00:44:50: Now, if somebody could get just a little implant, that just goes under their skin, where you get this one implant and you're good for years, then many people would elect to do that.
00:44:59: An example where that already exists is in birth control.
00:45:03: And so there's a product called Nexplanon that gets inserted under the skin, kind of in between the bicep and the tricep right here, by your primary care physician, and then you're good for years, three years, thereabouts.
00:45:16: And it just continuously elutes, releases birth control so that you don't ever have to take a pill.
00:45:21: these things already exist.
00:45:23: And so having an implant that goes under the skin, that has some functional benefit to it, that wirelessly can talk to all the technology around you and be kind of your universal, hands-free controller, is something that will absolutely happen over the coming, you know, five, ten years.
00:45:40: Everything starts with a medical or military application and then turns into something that we all take for granted.
00:45:48: GPS and the internet are perfect examples.
00:45:48: The same thing is going to happen with smart implants.
00:45:51: Now, we could argue, are people going to get brain implants electively?
00:45:55: I'm not sure about that.
00:45:57: But under the skin implants, absolutely already exists in multiple different applications today.
00:46:01: Well, it's hard to imagine right now, but let's just recall that nowadays people have pacemakers, artificial hips, already a multitude of technology which is being implanted into their bodies.
00:46:21: So in five or ten years this could be just a new interface and you can even have devices which people with the implant can use.
00:46:33: Is that a vision?
00:46:35: Absolutely.
00:46:36: The vision is ultimately that it's compatible with the internet of things around you, so that you can use this to control your laptop or your cell phone,
00:46:45: refrigerator, whatever it is, so that you don't have to have these handheld control interfaces and tap-and-touch interfaces, which allows you to be much more efficient ultimately, especially under certain circumstances.
00:46:57: And so that's definitely the direction that we want to head.
00:46:59: And once we are an FDA approved device, you know, you can start to move towards that direction.
00:47:03: Let's walk through this sort of almost world of science fiction.
00:47:07: So it could be that you are being hacked, so suddenly, you know, your hand does things which you don't want to do, because the data channels coming in are being hacked.
00:47:23: So the security of this device has to be quite, quite high, doesn't it?
00:47:31: Absolutely, yeah.
00:47:32: So we have multiple different layers of security for our system.
00:47:37: One example of this is that we actually tag all the different pieces of our device with unique identifiers.
00:47:44: The same way that you can securely pay with, you know, tap and go off of your phone.
00:47:51: That's actually this technology; it's a momentary magnetic transfer of information.
00:47:57: We have the same technology built into our system.
00:48:00: And so we utilize a lot of the same infrastructure associated with, like, secure payments in order to make sure that our system is secure.
00:48:08: And then also, not only do you not want someone hacking into your system and wrongfully controlling, you know, your limb or whatever it is, you also don't want two people with your implants to be side by side and, erroneously, I'm controlling my friend's limb and he's controlling my limb, because the systems get confused.
00:48:28: And so having things like unique identifiers ensures that your system is only able to communicate specifically with your little edge compute system, which can only communicate with your prosthesis.
00:48:37: There's all this infrastructure built in to ensure that there's high specificity and high security with each person's system.
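The unique-identifier layer described above can be illustrated with a toy model. Everything here is hypothetical, since the real protocol is not public: each implant is bonded to exactly one edge-compute unit, and commands carry an authentication tag, so a neighbor's implant or a tampered message is rejected.

```python
# Toy sketch of implant-to-edge pairing with unique IDs and message
# authentication. Illustrative only, not Phantom Neuro's actual security design.
import hashlib
import hmac
import os

class EdgeCompute:
    def __init__(self, implant_id: str, shared_key: bytes):
        self.implant_id = implant_id  # bonded at provisioning time
        self.shared_key = shared_key  # secret shared with that one implant

    def accept(self, sender_id: str, payload: bytes, tag: bytes) -> bool:
        # Reject messages from any implant other than the bonded one.
        if sender_id != self.implant_id:
            return False
        # Reject tampered payloads via a constant-time HMAC check.
        expected = hmac.new(self.shared_key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

key = os.urandom(32)
edge = EdgeCompute("implant-001", key)
msg = b"close_hand:0.6"
tag = hmac.new(key, msg, hashlib.sha256).digest()

print(edge.accept("implant-001", msg, tag))           # authentic, bonded implant
print(edge.accept("implant-002", msg, tag))           # neighbor's implant: rejected
print(edge.accept("implant-001", b"open_hand", tag))  # tampered payload: rejected
```

The point of the sketch is the two independent checks: identity (who is talking) and integrity (was the command altered), mirroring the "high specificity and high security" Connor describes.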
00:48:44: But it's going to be a very interesting topic as things move forward.
00:48:48: Yes. If you look at the future, I mean, Phantom Neuro, we talked about it, Ottobock, well, invested a lot.
00:48:58: So there is, or it seems that there is, quite a bit of interest.
00:49:04: What are the groups you are looking at who are slowly knocking at your door?
00:49:10: In terms of, like, acquisition potential and all of that?
00:49:13: Or partnerships or applications, even?
00:49:18: Yeah, so more so on the application side.
00:49:21: I think that's more interesting.
00:49:23: So, for example, humanoid robots. One of the biggest problems with humanoid robots is that there's no massive database associated with human movement to feed into these robots to make them move like humans, whereas, like, an LLM is trained on human-generated language data scraped from the internet.
00:49:43: Self-driving is trained on human-generated driving data.
00:49:47: Humanoid robots, ideally, are trained on human-generated movement data.
00:49:51: Well, it's really hard to get that.
00:49:52: And so I think that as these neural interfaces become more commonplace, we'll start to develop these massive data sets of really high granularity movement data that can be fed into that.
00:50:04: And so that's something that's very interesting for us.
00:50:08: Interacting with things like computers, or vehicles or things like that are something that's of interest to us.
00:50:17: Interacting with different types of prosthetic limbs, for example, you can have different types of attachments.
00:50:24: One of the things that people usually smile about, if they see a video, is that an amputee who has a robotic limb and wrist can spin the hand indefinitely in a single direction, right?
00:50:34: And that's something that our biology does not allow for.
00:50:37: And so that's a great simple example of somebody with a disability who's now able to do something that is superhuman, right?
00:50:45: And so you can become super-abled and you can have different types of attachments for your prosthesis that can allow you to do things that intact, you know, biological limbs are not capable of.
00:50:56: And so that's something that's very much of interest to us.
00:50:59: And so there's all kinds of applications.
00:51:02: It's interesting, I visited a lab in Sweden and they were focusing on your eyes, you know, eye control, you can do a lot of things there.
00:51:15: And so I tested it out and what was interesting is in a certain way, well, I felt... On one side, it was great.
00:51:26: You can control things very fast.
00:51:28: On the other side, somebody else was knowing what I was doing.
00:51:34: Could this also become a topic here, that you can record the biodata of a person, their everyday life and actions, where intimacy plays a role?
00:51:51: How do you look upon that?
00:51:54: I think that we can be, in theory, different from eye-tracking in that you do not have to do full hand-finger movements in order to generate the signals that we utilize.
00:52:07: You can do very subtle flexions of your fingers that you know you're doing, but visually you don't see anything happening and use that as a control input.
00:52:15: And so you can generate data and control things without anybody knowing what exactly you're doing.
00:52:22: Now, there's a lot of metadata associated with a person and their system, right?
00:52:27: That's all anonymous and, you know, architected from a company perspective in the right way.
00:52:34: But whenever we're talking about things like that, you know, privacy and your data and whatever that is, I always bring up that people are already walking around with a cell phone that collects infinitely more data than any of this, way more important data about who you are as a person, the things you like, the things you don't like, what you're doing on all of your different mobile applications.
00:52:55: If you're worried about that and don't want to choose a product because of that, then you should get rid of your cell phone because that is a much bigger security risk to you as an individual than a neural interface.
00:53:09: The thing that a neural interface, I guess, has that a cell phone doesn't have, in theory, is hacking into the human body, which I don't really believe is, at least for the very foreseeable future, a big problem.
00:53:23: But in theory, it has that unique aspect.
00:53:25: But in terms of data, your cell phone's worse.
00:53:27: You spoke about this very fascinating idea of, well, setting up a huge database for... humanoid robotics or other things.
00:53:42: But to what extent is it a problem that every person is different?
00:53:48: So do you have a database which sort of matches?
00:53:57: Or must you sort of distill, well, some sort of recurring patterns from person to person?
00:54:02: Very interesting question.
00:54:04: So Meta, for example, has a wearable EMG armband, and their ultimate goal is to have anybody that's able-bodied be able to put on this wristband and it automatically works,
00:54:17: no calibration or training associated with that.
00:54:20: In order to have any chance of doing that, you have to have data from probably tens of thousands of individuals, to try to find commonalities between all of those people, to where something can work right out of the box.
00:54:34: And so they've put out very interesting research articles associated with that.
00:54:39: Again, research is very different from actual real-world implementation for a naive user that's never used the system.
00:54:45: And I believe things when I see them.
00:54:47: But they have really compelling evidence in the literature.
00:54:52: That's on able-bodied people,
00:54:53: who more or less have identical anatomies, even though they do things in different ways.
00:54:58: Everybody's injury is different.
00:54:59: And so you add this very significant layer of complexity when you're talking about people with disabilities.
00:55:06: Because, let's say, you have a median nerve, and your median nerve controls the majority of closing your fist.
00:55:11: Well, I could have a median nerve injury and you could have a median nerve injury.
00:55:15: And the way that that presents its impact on how our muscles move or don't move would be radically different.
00:55:21: And so it is damn near impossible to find commonalities that allow something to work out of the box for people with disabilities.
00:55:31: And it may or may not be possible for able-bodied people given enough time and enough data.
00:55:36: So all this to say that a system like ours, for the foreseeable future, for people with disabilities, will have to be calibrated on each person's individual data for it to work.
00:55:46: Now, there's also the fact that you want the system to work really well.
00:55:50: for you in particular.
00:55:51: I don't care how well it works for my cousin.
00:55:53: I want it to work really well for me.
00:55:55: And so I want to calibrate it specifically to me so I get the best, most optimized experience for myself rather than a suboptimal experience where I don't have to calibrate like Meta might provide.
00:56:07: So that's how it works.
00:56:08: But you want to learn over time in the cloud with this amalgamation of data so that it works better and better out of the box.
00:56:15: And then you have this ever increasing scale of efficacy.
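The per-person calibration Connor describes can be made concrete with a toy example. This is purely illustrative (synthetic numbers and a deliberately simple nearest-centroid scheme, not Phantom Neuro's actual algorithm): record labeled muscle-signal samples from one wearer, average them into per-gesture templates, and then classify new signals against that individual's own templates.

```python
# Toy per-user calibration: build gesture templates from one wearer's
# labeled samples, then classify new samples by nearest centroid.
def calibrate(samples):
    """samples: {gesture: [feature_vectors]} -> {gesture: centroid vector}."""
    templates = {}
    for gesture, vectors in samples.items():
        n = len(vectors)
        # Average each feature channel across that user's recordings.
        templates[gesture] = [sum(col) / n for col in zip(*vectors)]
    return templates

def classify(templates, vector):
    """Return the gesture whose centroid is nearest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vector))
    return min(templates, key=lambda g: dist(templates[g]))

# One user's calibration session (made-up 3-channel muscle features).
user_session = {
    "close_hand": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "open_hand":  [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
templates = calibrate(user_session)
print(classify(templates, [0.85, 0.15, 0.2]))  # close_hand
print(classify(templates, [0.1, 0.9, 0.85]))   # open_hand
```

The design point matches the interview: the templates belong to one person, so the system is optimized for that individual, while aggregated (anonymized) data could later improve the starting point for new users.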
00:56:18: But even there, I believe, it will remain in a sort of individual arena, because, I mean, if you look at movement, you can tell just by the way a person moves:
00:56:30: oh, that's him or her.
00:56:33: I just remember, you know, my wife, when we go skiing, even from a far distance, I know just, well, the way she moves, this is my wife.
00:56:43: So this individual patterns, this individual movement is going to stay.
00:56:49: It's not that, you know, at the end, all the users of Phantom Neural will have standard movements.
00:56:57: That's exactly right.
00:56:58: So actually you bring up something really interesting.
00:57:00: So there are actually people working, with really compelling evidence, on gait recognition rather than facial recognition, and they're able to almost perfectly determine who somebody is based off of their walking gait.
00:57:11: So that shows you right there the differences in people's musculoskeletal performance, so to speak.
00:57:18: So yes, our system for the foreseeable future will require personalized training data sets to optimize each person's experience for themselves.
00:57:28: And we like it that way.
00:57:29: And well, before we slowly come to an end, you naturally also have exoskeletons.
00:57:37: So you can use this in machines to power people to do all sorts of things.
00:57:46: That's also an application, isn't it?
00:57:47: Absolutely.
00:57:48: So for any robot.
00:57:50: that's trying to augment your function, whether that's injured function or able-bodied function.
00:57:56: The more data you have from the body about what the body is doing or is intending to do, the better the system is.
00:58:03: And so for exoskeletons, there's been a lot of academic effort put into the question:
00:58:07: can you incorporate muscle electrical data, and then some on the brain side, to better allow the exoskeleton to do what it is that you want to do?
00:58:17: So in a rehab environment, you might have an exoskeleton for somebody with a partial spinal cord injury who still wants to relearn how to walk, where they're actually controlling the exoskeleton with a joystick on the exoskeleton.
00:58:30: So it has nothing to do with what their limbs are doing or attempting to do.
00:58:33: It's a joystick where you push it forward and it starts to walk forward and then you try to follow it.
00:58:39: Imagine an elderly person who obviously can't walk as well as they once did and falls are a great risk.
00:58:44: If you break your hip, within two years you're most likely going to pass away after that, statistically; it's a massive problem with gait for elderly individuals.
00:58:53: Well, they don't have the strength to drive this exoskeleton and command it around them.
00:58:58: They want the exoskeleton to augment what they're doing because they don't have the strength.
00:59:01: And so you need some way to know what the human is doing.
00:59:04: And so human machine interfaces like this will provide that data layer in order to drive these exoskeletons and make them much more reliable and safe.
00:59:13: So with exoskeletons, you have this additional concern, where you have these really strong robots that now you're wearing on relatively fragile bodies, right?
00:59:24: And so you can actually break your body with an exoskeleton if it's not regulated in certain ways and not in tune with what the body is actually trying to do.
00:59:35: But we absolutely are interested in that and working towards a system that's compatible with that kind of approach as well.
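The idea of muscle data driving a safety-limited exoskeleton can be sketched with a toy calculation. This is a hypothetical control law with made-up numbers, not any real product's behavior: assist torque scales with measured muscle effort, but is clamped so a strong robot cannot overpower a fragile body.

```python
# Toy muscle-driven assist: torque follows the wearer's effort, clamped
# by a safety limit. Gain and limit values are purely illustrative.
def assist_torque(muscle_effort: float, gain: float = 20.0,
                  torque_limit: float = 12.0) -> float:
    """Map normalized muscle effort (0-1) to a safety-clamped torque in N*m."""
    effort = min(max(muscle_effort, 0.0), 1.0)  # discard out-of-range readings
    return min(gain * effort, torque_limit)

print(assist_torque(0.3))   # gentle assist that follows the wearer
print(assist_torque(0.9))   # clamped at the safety limit
print(assist_torque(-0.2))  # a bad sensor reading produces no torque
```

The clamp is the point: the exoskeleton augments what the body signals it wants to do, rather than commanding motion the body cannot safely follow.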
00:59:41: Well, let's end with the future.
00:59:44: I mean, in ten years, I will be seventy-six.
00:59:50: So slowly, you know, these questions of mobility could become more and more important.
00:59:58: By then, will we live in a world where neuroimplants, where, well, this sort of new interface is becoming normal?
01:00:08: Yes, absolutely.
01:00:09: I believe so.
01:00:11: I think they'll become really widely used over the next five-ish years, because I fully believe that it will be a step change in quality of life for people that receive these implants.
01:00:24: And the technology that's used today for patient outcomes really has not changed in a very long time, right?
01:00:30: That open and close hook was created in the US, say, during the Civil War, right?
01:00:36: A long, long, long time ago.
01:00:38: And so we have the chance now for people with disabilities to benefit from the monumental advancement of robotics that's taking place over this time period that to date hasn't been able to impact them.
01:00:48: So I think ten years from now, it will be uncommon, I hope, to see an amputee that's not using a robotic prosthetic limb, upper or lower.
01:00:59: I hope that we will see fewer people in wheelchairs and more people utilizing kind of low-profile exoskeletons.
01:01:07: I believe that we'll see people with disabilities with these controllers, not only controlling prosthetic limbs, but also their phones and their computers and things like that, definitely within five years.
01:01:18: So these things are platform controllers, where a person with a disability who gets one will do multiple things with this single controller.
01:01:26: It's not like it's just used for a prosthetic limb.
01:01:28: It will be used for them to have a higher quality of life in multiple different aspects.
01:01:32: And so, ten years from now, I think it's going to look very, very different than it does today.
01:01:36: It's very exciting.
01:01:37: If we look into the future, at least what you sort of say is that there's a lot of hope of technology, well... helping us in a way.
01:01:50: My last question is, you know, is this going to be affordable?
01:01:54: So is this technology also cost-wise going to come down so that many people can use it?
01:02:00: Absolutely.
01:02:01: So the cost of a system largely is determined by, you know, what are the input costs, cost of goods, right?
01:02:09: For the actual device to manufacture it and all the R&D that took place.
01:02:12: But also, how much infrastructure is required
01:02:15: per patient to receive something?
01:02:16: So for our system, like I said: outpatient procedure, non-specialized surgeon, no general anesthesia.
01:02:23: Thirty minutes to maybe an hour, right? That's a very low-overhead procedure that can approach an elective type of procedure cost, right? Which needs to be on the order of thousands of dollars.
01:02:34: And so we're actively working to get our cost of goods
01:02:38: as low as possible and to simplify the procedure as much as possible, so we can get within the cost range of an elective device and procedure.
01:02:47: Now, something like a brain implant requires a specialized neurosurgeon, really complicated CT scans for pre-surgical planning, all these different things that are very, very expensive.
01:02:57: And so that's a much harder business model to move towards, especially to make elective.
01:03:02: God forbid.
01:03:02: And so, yeah.
01:03:03: Yeah.
01:03:03: Well, unfortunately, Connor, you are too late.
01:03:07: I remember Stephen Hawking, whom I met, you know, the famous physicist.
01:03:13: And as you know, communication was really difficult.
01:03:18: And I could imagine that with today's possibilities, well, he could have had an even more enriching life, couldn't he?
01:03:29: Absolutely, a much more enriching life with these.
01:03:32: A lot of people don't see what actually takes place behind the scenes.
01:03:36: You know, for someone like Stephen Hawking or something like that, you know, you always see the short clips of, you know, well thought out, long sentences and all of that.
01:03:43: What you don't see is the amount of time that it takes to generate that sentence and how Stephen Hawking, for example, went about generating that and selecting the letters and all of this.
01:03:52: And it was very complicated the way that he actually communicated.
01:03:56: And so, you know, systems like this will make that type of communication much, much easier, because it's a much higher-bandwidth level of communication.
01:04:06: Well, Connor, thank you for sharing your journey, and we'll be following closely as Phantom Neuro moves into, well, the first trials with humans, which are going to take place next year or in the next two years.
01:04:23: And thank you for your insight and inspiration.
01:04:28: And actually also thanks to Ottobock for making this podcast possible.
01:04:34: And well, if you enjoyed today's episode, please subscribe.
01:04:38: Thanks for listening.
01:04:40: And Connor, see you soon.
01:04:42: Thank you so much.
01:04:42: Real pleasure.