Catherine Mohr:
Daniel asked me to talk about robotics and exponentials. And since I'm following Neil's excellent talk about AI, I won't spend a whole lot of time on the decision-making structures within robotics-- more on the mechanisms and the sensors-- but I will talk about how those fit into exponentials. I'm also coming ahead of a talk on 3D printing a couple of talks down in this introductory section, and 3D printing is another form of medical robot. And because there is such a wide variety of medical robots, I thought I should perhaps start with a little bit of a definition.
And so you go to the Oxford English Dictionary when you want to find sort of what society has as a consensus on what a robot is. And I was dismayed to find that this seems to be the primary definition for a robot out of the Oxford Dictionary-- a machine resembling a human.
That's the pop-culture version of a robot-- it's what the movies tell us robots are. Most of the robots in the world have nothing at all resembling a human. "Able to replicate certain human movements and functions automatically"-- again, that's the movies' idea of a robot.
The more general case was their second definition, which I would have put as the primary one: a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer. Emphasis mine on that last part. And so when you really think about it-- what is a robot?
And what, then, would be a medical robot? This is the definition. At its simplest level, it's a sensor giving inputs into a decision-making function, which then executes what it wants to do in the world through some form of actuator-- and there are feedback loops running through all of it.
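To make that loop concrete, here's a minimal sketch in Python. The thermostat-style "robot" below is a toy stand-in of my own invention, not any device from the talk-- it just shows the sense, decide, actuate cycle in its simplest form.

```python
# A minimal sense -> decide -> actuate loop. The thermostat "robot" is a
# hypothetical stand-in used only to illustrate the definition above.

class Thermostat:
    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c
        self.heater_on = False

    def sense(self, room_temp_c: float) -> float:
        # A real robot would read hardware here; we take the reading as input.
        return room_temp_c

    def decide(self, temp_c: float) -> bool:
        # Decision-making function: compare sensed state against the goal.
        return temp_c < self.setpoint_c

    def actuate(self, turn_on: bool) -> None:
        # Act on the world through the actuator (the heater relay).
        self.heater_on = turn_on

    def step(self, room_temp_c: float) -> None:
        # One pass around the feedback loop.
        self.actuate(self.decide(self.sense(room_temp_c)))

bot = Thermostat(setpoint_c=21.0)
for temp in [19.5, 20.4, 21.2]:
    bot.step(temp)
    print(f"{temp} C -> heater {'on' if bot.heater_on else 'off'}")
```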
And at its most fundamental level, this is really what a robot is. And so when we think about medical robots, this is the closest that we have right now to that first-level definition where it's humanlike. And you'll notice it's all about the eyes.
These are cute. These are companion robots. These are coach robots. We're very hardwired to be able to react to something that's a pet and to feel good about it as we're sort of petting it and cuddling it. And so this form of robot is associated with well-being, combating loneliness.
The coaching robot on the left side, Autom, is a weight-loss coach. It gives you advice, and it talks to you-- again, about your health. But these are really not the full human-facing AIs with general intelligence that we tend to imagine.
These are very, very specific, and they've each got a very particular function that they're trying to perform. On the left is probably the most ubiquitous robot in the world, and it's far from cuddly. It's the washing machine.
But these are medical robots also. They are responsible for keeping people away from infectious material as they are washing it. The washing machine maintains cleanliness, but it's not very cuddly, so we don't really think of it as a medical robot-- but it does have those aspects to it.
It's also part of extending independent living. If you have technologies like this in the house, you don't have to do all of the fiddly, day-to-day self-care activities yourself. Labor-saving devices like these can help keep people aging in place.
But think about the task from beginning to end, from soiled linen to laundry clean and put away. The robot on the right is trying to fold socks: take them out of a basket, pair them up visually, manipulate them in such a way that they can be matched, and fold them so the laundry can be put away.
And this is a very complicated problem, and it's one we haven't really solved. There's a lot of research being done in these areas, but this kind of independent-living-extension robot is still very much in the early prototype phase.
So you've got SpotMini from Boston Dynamics-- if you've seen the videos of it, I'm terrified that it's going to break that glass as it's putting it into the dishwasher. But I'm also terrified that my 14-year-old is going to break the glass putting it into the dishwasher.
My 14-year-old doesn't break them terribly often. And then there's the robot kitchen, which is supposed to interact with the pots, pans, and utensils that humans use, and be able to cook for you. So we'll see about those coming out in the near term, but they are still fairly far off.
These unstructured tasks are the hardest thing for us to do in robotics. And they're the hardest because they require independent decision-making. They require tactile sensors that touch a lot of things, and distance sensors that let them look around and identify objects.
And they require a lot of cognitive processing power in order to have elegant interactions, to be able to not be menacing around a person, and to also be able to do a lot of the very complicated things that humans do naturally with our manipulators.
So one of the reasons why this particular home-care robot video is so damn funny, especially for people who work in robotics and are trying to make this a reality, is this is kind of where we are right now. No matter how hard you're working on trying to solve this problem, we're kind of still here.
But some robots are enormously successful. We often don't think about this as a medical robot, but this is a gene sequencer. And we think of it in terms of its output. We don't think of it in terms of it being a robot. Similarly, 3D printing. We think of it in terms of its output. We think of, what do we get out of 3D printing?
Not, what are the robotic technologies that enable it? But these two machines are incredibly elegant robots. And when we think about how we're interacting with them in the medical world, it's really about, how are we interacting with their products? So I'm going to show you the same graph that Daniel showed, because this is just so exciting.
This is how much it costs us to sequence a genome. And robots are what drove this curve down-- plus some really brilliant thinking about the process that you were going to have the robots implement.
But what the robots do is they get contaminating humans out of the loop of doing a lot of this testing. And they increase the precision, and they increase the speed, and they increase the repeatability of being able to do this. And this is why we can automate this process.
When we think about what we can do with 3D printing, we're so focused on how we can augment the body-- how we can use these robots to make tools that conform to the individual.
We can also think about it in terms of implantables. How do we use different sorts of materials to put them inside the body, make a scaffold that then the body heals by incorporating that scaffold in? But these are outputs of medical robots.
And then my favorite, the da Vinci Surgical Robot. Now, we think of it in terms of process. That process is surgery. And you might ask, though, is it really a robot? So far it looks the most like a robot of all of the things that I've shown you, and it certainly meets the definition.
It's got sensors in its joints. It's got sensors in the instruments. It's got sensors all over. But the decision-making function is human. The surgeon is sitting at a console, using the sensor data coming back from the robot, and making the executive decisions associated with manipulation.
And so while you think of a da Vinci Surgical Robot as having these kinds of manipulators through small incisions, doing surgery inside the body, it's still the human's manipulators that are moving those input devices. And those robotic manipulators are just the output of that. And so you might say this completely fails our test.
We have to kick the da Vinci out of the robot club. But on the left is the input mechanism, and it's actually a robot all by itself. It is a gravity-balanced device that has motors and sensors in each one of those joints, so that when the person is moving that controller around, it is not dragging with gravity.
It's compensating for the gravity, and it's sensing where they are. In addition, on the right-hand side, each one of those arms is independently a robot. It is controlled separately, takes its own commands, and inside are all of the systems for sensing, decision-making, and actuation.
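As an illustration of what "compensating for the gravity" means mechanically, here is a hedged sketch for a planar two-link arm. The masses, lengths, and angles are invented for illustration; the actual da Vinci controller design isn't public and is certainly more elaborate.

```python
import math

# Gravity-compensation sketch for a planar two-link arm. All parameters
# are illustrative defaults, not da Vinci values.

def gravity_torques(q1, q2, m1=1.2, m2=0.8, l1=0.3, lc1=0.15, lc2=0.12, g=9.81):
    """Joint torques (N*m) that cancel gravity at pose (q1, q2).

    q1, q2: joint angles in radians from horizontal.
    lc1, lc2: distance from each joint to its link's center of mass.
    """
    # Joint 2 supports only link 2's weight.
    tau2 = m2 * g * lc2 * math.cos(q1 + q2)
    # Joint 1 supports link 1's weight plus all of link 2 hanging off it.
    tau1 = (m1 * lc1 + m2 * l1) * g * math.cos(q1) + tau2
    return tau1, tau2

# Commanding these torques every control cycle makes the controller feel
# weightless in the operator's hand, whatever pose it's in.
print(gravity_torques(math.radians(30), math.radians(45)))
```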
This is a robotic exoskeleton-- another thing where you might think, oh, you've got the human in control. And while the human is in control, this is really more of a wearable robot, similar to what Daniel was talking about when he discussed exoskeletons.
There are sensors on board that sense the user's intent, adapt to it, and drive the motors. Similarly with David Eagleman's sensory VEST. This is a robot that you just kind of wrap around you.
It's sensing sound, figuring out how to translate that into a set of stimuli on the back, and then running its actuators to give an output. So this is a wrap-around wearable robot.
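Below is a hedged sketch of the kind of sound-to-touch translation being described: split an audio frame's spectrum into bands and drive one vibration motor per band. The motor count and band layout are my assumptions, not the VEST's actual encoding.

```python
import numpy as np

# Sketch of a sound-to-vibration mapping: sense audio, decide on per-band
# energies, actuate one motor per band. The layout is hypothetical.

def sound_to_motor_levels(audio: np.ndarray, n_motors: int = 16) -> np.ndarray:
    """Map one audio frame to vibration intensities in [0, 1], one per motor."""
    spectrum = np.abs(np.fft.rfft(audio))         # sense: frequency content
    bands = np.array_split(spectrum, n_motors)    # decide: one band per motor
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy  # actuate: normalized drive

# 50 ms of a 440 Hz tone at 16 kHz: energy lands on the lowest-band motors.
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
print(np.round(sound_to_motor_levels(np.sin(2 * np.pi * 440 * t)), 2))
```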
And so in some ways, you could kind of think of the da Vinci as a special case of a wearable robot-- just a spectacularly immovable one. And then there are wearables. You would rightly say these probably aren't robots by themselves. But nothing in our definition of robots says that the sensors and the actuators need to be co-located.
And if we're thinking about these sensors going up to the cloud, where the cloud is observing our activities, and understanding our habits, and all of this sort of thing, we can then have that algorithm say, she probably wants a cup of coffee when she gets out of the shower.
And it can talk to my Internet-of-Things-connected coffee pot, and it can have a cup of coffee waiting for me when I get out of the shower. Now, whether caffeine is strictly medically necessary is up for debate, but it is about well-being.
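A minimal sketch of that distributed sensor-cloud-actuator pipeline. Every device name and the "habit" rule here are invented for illustration-- the point is only that the three robot functions live in three different places.

```python
import json

# Distributed robot: the sensor, the decision-maker, and the actuator are
# separate components talking over messages. All names are hypothetical.

def wearable_report(event: str) -> str:
    # Sensor half: a wearable publishes an observed activity to the cloud.
    return json.dumps({"device": "shower-sensor", "event": event})

def cloud_decide(message: str) -> list:
    # Decision half: the cloud turns a learned habit into actuator commands.
    if json.loads(message)["event"] == "shower-ended":
        return [json.dumps({"device": "coffee-pot", "command": "brew"})]
    return []

def coffee_pot_actuate(command: str) -> None:
    # Actuator half: the IoT coffee pot executes the command.
    if json.loads(command)["command"] == "brew":
        print("Coffee pot: brewing.")

for cmd in cloud_decide(wearable_report("shower-ended")):
    coffee_pot_actuate(cmd)
```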
And so it's maybe a stretch, in some ways, to call this a medical robot. But the principle is exactly the same if we are talking about something like a smart toilet, where we are sensing the metabolites in your urine.
And now, you can start to say, well, what are the actuators that would come out of a distributed robot like this? Would I talk to my doctor or pharmacy and adjust my medications because I'm peeing out too much or too little of the metabolites of my drug?
Would I tweak my grocery list and put more fruits and vegetables into my grocery list? Or would I talk to that future chef robot in my kitchen and say, we are totally cutting Catherine off from all of those cream sauces? No more cream sauces for you.
So we have to be careful what we wish for in this sort of a situation. But when we think about wrapping a robot, or wrapping a set of robotic technology around the entire patient, our definition of robot starts to get very broad. So I promised you robots and exponentials.
And you may be wondering, where's my Moore's Law? I need my Moore's Law. Here's your Moore's Law. One of the interesting things about exponential growth in computing power, and things getting smaller and smaller, is that there's a really nice metric: the number of transistors.
And as you may have noticed in all the robots I talked about, it's very hard to have a common metric, because robots aren't really a single technology. And in most cases, technology makes an exponential change, not because it has a really long run, like we've had with transistors, but because it jumps from one technology to another.
You jump from one technology S-curve to the next. And so I had this kind of fuzzy "technological performance" graph, with performance up one side as my placeholder for robot goodness, extending over time.
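One way to make that fuzzy graph precise: treat each technology as its own logistic S-curve, and take the overall capability to be the envelope of whichever curve is best at the time. The formula below is schematic-- the parameters aren't fitted to anything.

```latex
% Technology i follows a logistic S-curve with ceiling L_i, steepness k_i,
% and midpoint t_i; aggregate capability is the best curve available.
\[
  P_i(t) = \frac{L_i}{1 + e^{-k_i (t - t_i)}}, \qquad
  P(t) = \max_i P_i(t)
\]
% If each successor saturates higher (L_{i+1} > L_i), the envelope P(t)
% can track an exponential even though no single technology does.
```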
But if we go back to sensor, decision-making, and actuator, we can then start to ask: what kinds of technologies are making jumps in those sensors and those actuators? I'm not going to tackle decision-making, because Neil did that so well prior to this.
You can see that there are a lot of technological jumps going on in the decision-making side. But what's driving a lot of the things that we're imagining as futures are new kinds of sensors and new kinds of actuators. So we start with the sensor.
This was a classic-- God, I love this kind of engineering problem. This is a shape sensor. It's a glass fiber that, over the span of a meter, can tell you the pose of the tip to within a millimeter-- its XYZ position in space and the angle it's at.
And it can tell you the shape all the way along. So when that kind of technology lands in your lap in the medical robotics world, you say, what could we do with this? Where are there unsolved problems in medicine where a sensor like that could enable something?
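Here is a simplified two-dimensional sketch of how a fiber shape sensor can work: sample curvature at stations along the fiber (fiber Bragg gratings infer this from strain) and integrate to recover heading and tip position. The real sensor works in three dimensions with twist, and the curvature profile below is invented.

```python
import math

# 2-D shape reconstruction from sampled curvature along a fiber.
# The curvature values are illustrative, not measured data.

def reconstruct_tip(curvatures, segment_len):
    """Integrate per-segment curvature (rad/m) into tip pose (x, y, heading)."""
    x = y = theta = 0.0
    for kappa in curvatures:
        theta += kappa * segment_len        # curvature bends the heading
        x += segment_len * math.cos(theta)  # march one segment forward
        y += segment_len * math.sin(theta)
    return x, y, math.degrees(theta)

# A 1 m fiber sampled every 10 cm with a gentle constant bend of 0.5 rad/m:
# the tip ends up ~28.6 degrees off axis, and (x, y) gives its position.
print(reconstruct_tip([0.5] * 10, segment_len=0.1))
```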
And what we made is a catheter-- a robotic catheter that you can drive and steer, and that will hold a position. Daniel teased you with a new announcement: we have completed our first in-human trials with this robotic catheter.
And this is to give you an idea of why you would want a stable catheter. This is driving into the lungs to get to a tumor. The situation in there is very difficult: it's a tortuous path, super easy to get lost, and the person is breathing.
There are secretions. There are a lot of things that are going to keep you from getting all the way out to that surface. But being able to do a tight control loop around it, and being able to have it be a stable platform for getting out to the periphery of the lung, allowed us to go after nodules.
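As a toy illustration of what a "tight control loop" can buy you, here is a sketch of a PD controller holding a tip on target against a breathing-like disturbance. The gains, the one-dimensional dynamics, and the disturbance model are all invented.

```python
import math

# Position-hold sketch: a PD loop rejects a slow sinusoidal "breathing"
# disturbance. Toy 1-D dynamics, illustrative gains -- not a real catheter.

def hold_position(target_mm=0.0, kp=25.0, kd=8.0, dt=0.01, steps=500):
    pos, vel = 5.0, 0.0  # start 5 mm off target
    for i in range(steps):
        breathing = 0.8 * math.sin(2 * math.pi * 0.25 * i * dt)  # ~15 breaths/min
        u = kp * (target_mm - pos) - kd * vel  # PD command from tip sensing
        vel += (u + breathing) * dt            # toy dynamics: force -> velocity
        pos += vel * dt
    return pos

print(f"tip offset after 5 s: {hold_position():.3f} mm")
```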
And if there are any pulmonary clinicians in the room, these numbers-- the 12.3 millimeters and the 14.8 millimeters-- practically caused a riot at the CHEST conference. Because this is a size of tumor that we don't really go after right now with any kind of intrabronchial approach.
The yield is just too low to get them with any kind of reasonable accuracy. And we were able to reach tumors and sample them all the way out to the periphery, at a size that generally isn't even attempted. Now, this is not coming out in the very near term-- this is first-in-human use.
This is part of a long process that one goes through to develop this kind of a technology, but the early results are very exciting. Why do we care about 1-centimeter nodules? Why do we care about 12-millimeter nodules? It's because lung cancer is very, very deadly if it's much larger than that.
At stage 1A or 1B, you generally have between a 75% and 80% chance of still being alive at five years, if we've sampled that tumor and identified the lung cancer when it's at that 1-centimeter size. Once it gets much bigger than that, survival falls off very rapidly.
And when we detect lung cancer because it's become symptomatic, the worldwide average is that 15% of people live to see the five-year anniversary of their diagnosis. So when we saw a new sensor, it was a long path to figure out that this was where we were going to go with it-- but a very exciting path.
We jump over decision-making to the actuators. I'm going to give you another example-- a very fun one. This is opto-microfluidics: nudging water with a laser. And if you nudge water with a laser, you can gently nudge what's in the water.
And you can sort bull sperm, because sperm with an X chromosome in them are a little heavier than sperm with a Y chromosome-- there is more mass to your X chromosome. Why sort bull sperm? Because this was developed in New Zealand, and the agricultural industry there is enormous.
And so everyone says, oh, you can sort cells? Let's sort sperm, so that we can get more females and don't have to cull quite as many males out of the dairy process. This is a really fascinating technology, because it doesn't just sort the cells.
It keeps them viable at the other end. And so I'm anticipating that something like this is going to explode some of the things that we are thinking about doing if we want to be able to rapidly sort cells, and then be able to make decisions, and put them back into a body in a viable state.
And so-- I don't have a single axis on which to plot goodness for exponentials. But when new technologies come out, the fundamental question you always need to ask is: what does this let us do that we couldn't do before?
Where are all of the problems that we're trying to solve in medicine that have that sort of "if only" step on them? And does this new sensor or this new actuator get us over that "if only" bit? Because these jumps are going to happen when you put a new capability into something.
And it allows a big step change in terms of the kinds of problems we're trying to solve with these. So I'm excited about the role that robotics is going to continue to play. It's going to be big, and complicated, and messy, and heterogeneous.
And there will be some really interesting step changes that happen as we get new sensors and actuators. And there'll be some fascinating step changes that happen as we get new decision-making engines, but that's where all the fun is. So thank you very much.
[APPLAUSE]