Ramesh Raskar:
To solve some of the biggest problems in health, how can we make the invisible visible, whether it's deep biophysical matter or hidden, talented minds? But before that, let's talk about what's already visible. How many of you think these two lines are different? Which one is longer-- the one on the left? The one on the right? Equal?
All right, sorry I tricked you there. But as you can see, we don't see with our eyes. We record with our eyes and see with our brain. And the brain is amazing. It can be fooled, because of all that processing. But can we also give some superpowers to this brain, to this computational unit?
Can we create devices that can see around corners, beyond the line of sight? Can we create vehicles that can see through fog as if it were a sunny day? Can we read a book page by page, without opening the cover? Can we create images that look like X-rays, but using just ordinary light?
So when we think about how to make invisible biophysical matter visible, we often ask how we can use a new part of the electromagnetic spectrum, such as X-rays or UV-- something we cannot see with the human eye. But I think that trend-- that kind of research, that kind of invention-- is almost finished. We are unlikely to find another band of the EM spectrum that somehow has new magical abilities.
I think what's happening, very much like the puzzle I showed you earlier, is that it's not just what you record, but how you process. And so for that, we have a new methodology called femtophotography, which actually exploits scattered light-- something we are not very aware of.
Say I take a laser pointer and turn it on and off very quickly, so I can create a very narrow packet of photons. Think microsecond, nanosecond, picosecond-- ten to the minus 12 seconds-- all the way down to femtosecond, ten to the minus 15. And if I do it very quickly, the packet of photons is going to be very, very short-- maybe just a millimeter long. And if I take this bullet of photons and fire it inside a Coke bottle, you can actually see light in motion. Because femtophotography is so fast, it can create movies of light in flight.
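To make those numbers concrete, here is a quick back-of-the-envelope sketch (my own arithmetic, not from the talk) of how long a packet of photons is for a given pulse duration:

```python
# Packet length = speed of light x pulse duration.
C = 3.0e8  # speed of light, m/s

for name, duration_s in [("microsecond", 1e-6),
                         ("nanosecond", 1e-9),
                         ("picosecond", 1e-12),
                         ("femtosecond", 1e-15)]:
    print(f"{name:>11} pulse -> packet about {C * duration_s:.0e} m long")

# A pulse a few picoseconds long gives a packet on the order of a millimeter,
# which is the "bullet of photons" described above.
```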
So we send this bullet of photons, and as the light propagates-- as these photons propagate-- most of them go in a straight line to the center of the bottle. But meanwhile, a lot of the light energy is propagating along the table, as if you had thrown a stone into a pond of water. Most of the photons hit the cap-- but do you see the air bubble at the top? Light is bouncing around there. In the meantime, the waves keep propagating along the table, and after several hundred picoseconds, because of the curvature of the bottle, the light is focusing at the back of the bottle as well.
Now, this whole thing takes place in roughly one nanosecond. That's how much time it takes for light to travel the length of the bottle-- about a foot. So we use this computational technique to slow it down by a factor of 10 billion, so you can actually see light in motion. So this is not some kind of "Coke versus Pepsi" demo. Really, we want to see what we can do with femtophotography.
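As a sanity check (again my own numbers, not the talk's), you can verify both the one-nanosecond transit time and what a 10-billion-times slowdown means for playback:

```python
C = 3.0e8        # speed of light, m/s
bottle_m = 0.3   # about one foot, the length of the bottle

transit_s = bottle_m / C       # ~1e-9 s: light crosses the bottle in ~1 ns
playback_s = transit_s * 1e10  # slowed down by a factor of 10 billion

print(f"transit: {transit_s:.1e} s -> playback: {playback_s:.0f} s of movie")
```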
So how can we use this to see around corners, beyond the line of sight? The idea is actually quite straightforward. And we do this with sound anyway. I can hear a person who's around the corner, without line of sight, because I can hear the echoes of that person's voice.
I can do the same thing here. I can shine light on the door. Part of the light will scatter and go into the room-- some of it onto a person of interest. A fraction of that light comes back to the door-- and an even tinier fraction back to the camera. And by analyzing that, you can create 3-D models of what's inside.
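One way to see how this works: each echo's time of flight constrains the hidden point to an ellipse whose foci are the laser spot and the observed spot on the door, and the hidden geometry is where many such ellipses agree. Below is a toy 2-D backprojection sketch with made-up geometry-- an illustration of the principle, not the published algorithm:

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Toy 2-D geometry (meters): spots on the visible wall ("the door") where we
# fire the laser and where the camera observes returning light.
laser_spots = [np.array([x, 0.0]) for x in (0.2, 0.5, 0.8)]
sensor_spots = [np.array([x, 0.0]) for x in (0.1, 0.4, 0.9)]
hidden = np.array([0.6, 0.7])  # the unknown object around the corner

def tof(l, p, s):
    """Time of flight: laser spot -> hidden point -> sensor spot."""
    return (np.linalg.norm(p - l) + np.linalg.norm(s - p)) / C

# Simulate the echo times the femto-camera would record.
echoes = [(l, s, tof(l, hidden, s)) for l in laser_spots for s in sensor_spots]

# Backprojection: every grid cell consistent with an echo's ellipse gets a vote.
xs, ys = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
votes = np.zeros_like(xs)
for l, s, t in echoes:
    path = np.hypot(xs - l[0], ys - l[1]) + np.hypot(xs - s[0], ys - s[1])
    votes += np.abs(path / C - t) < 33e-12  # ~1 cm tolerance, matched to grid

i, j = np.unravel_index(np.argmax(votes), votes.shape)
print(f"estimated hidden point: ({xs[i, j]:.2f}, {ys[i, j]:.2f})")  # ~(0.60, 0.70)
```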
It's not just science fiction. We built this. We published it. A lot of teams are working on it. And there's a new DARPA program that explores scattered light-- it came out just last year, a $30 million program to look at scattered light, to look at femtophotography.
And we're not there yet, but this exploratory technology could allow us to build vehicles that avoid collisions with what's around the corner, and create new types of endoscopes to look inside your lungs or colon or heart, beyond the line of sight. So this femtophotography-- exploiting scattered light, separating the eyes from the brain-- allows us to do something magical here.
So think about a million-finger endoscope. Stepping back from this, imagine putting your hand in a cookie jar with your eyes completely closed-- you can still pick out the cookie. Why is that? The reason we can do that is that we have sensors at the tips of our fingers.
So imagine now that we had a million fingers on an endoscope. What would that do? And we want them to be microscopic, so they can go into many parts of the body. So instead of worrying that, hey, there's a turbid liquid medium, there's tissue, there's blood--
we can just take these microfibers and insert them wherever we want to go. Then, using femtophotography, when we shine light through one fiber of the bundle, it arrives at the tips of the other fibers, and we calculate the time of arrival. Then we shine light from another fiber and calculate the time it takes to reach the others. And by doing this tomography, we can actually create 3-D models in a completely turbid medium.
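One simple way to turn those pairwise arrival times into geometry-- a toy sketch using classical multidimensional scaling, not the published reconstruction method-- is to convert each time of flight into a distance and recover the relative 3-D positions of the fiber tips from the distance matrix:

```python
import numpy as np

C_TISSUE = 3e8 / 1.4  # assumed speed of light in tissue (refractive index ~1.4)

rng = np.random.default_rng(0)
tips = rng.uniform(0.0, 0.01, size=(6, 3))  # 6 fiber tips in a 1 cm cube (truth)

# Simulated measurement: time of flight between every pair of fiber tips.
D_true = np.linalg.norm(tips[:, None, :] - tips[None, :, :], axis=-1)
T = D_true / C_TISSUE  # what the femtophotography hardware would record

# Classical multidimensional scaling: distance matrix -> relative 3-D positions.
D2 = (T * C_TISSUE) ** 2
n = len(D2)
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ D2 @ J                 # Gram matrix of centered tip positions
w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
X = V[:, -3:] * np.sqrt(w[-3:])       # top 3 eigenpairs -> 3-D coordinates

# Recovered geometry matches the truth up to a rigid rotation/reflection:
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print("max pairwise-distance error:", np.abs(D_rec - D_true).max())
```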
Then of course, you can imagine doing endoscopy with this optical brush, and capturing spectral information from it as well. And when we published this work some time ago, we realized that it allows us to create an endoscope not just for capturing images, but for creating 3-D images, and also for analyzing the scattering properties of the tissue or the biophysical material we care about. We can start measuring on skin without even shaving the hair, for example, because this optical brush can penetrate through.
And then we can also do a lot of targeted delivery. So far we have mostly been reading. But we can also start delivering.
So we think femtophotography is one of those exploratory technologies that challenge our notion of what it means to do photography or imaging. And we can use a similar technique to embed optical brushes in your toothbrush, and do that kind of imaging. Of course, it will take some time to bring this into a form factor as easy to use as a consumer toothbrush. But we could get there.
What about seeing through really thick material? Here, you have a block 1.5 centimeters thick. We can put a letter behind it and still read it through femtophotography, by analyzing the time of flight of the light that comes through.
So imagine, again, doing noninvasive measurements in many settings. And then we can take that one step further by looking at fluorescence lifetime imaging. Of course, that's very well known ex vivo. But now we can distinguish a cancerous cell from a benign cell, because the fluorescence lifetime is different. And the confusion often is: is the lifetime different because of binding, or because of scattering? Using femtophotography, we can distinguish that, too, just by analyzing it over different frequencies-- over different wavelengths.
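Under the hood, lifetime imaging is fitting an exponential decay, I(t) = I0 * exp(-t/tau), to the time-resolved signal. Here is a minimal sketch with synthetic data and purely illustrative lifetimes (the values and analysis pipeline are not from the talk):

```python
import numpy as np

def fit_lifetime(t, intensity):
    """Fit I(t) = I0 * exp(-t / tau) with a log-linear least-squares fit."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope  # tau, in the same units as t

t = np.linspace(0.0, 10e-9, 200)  # a 10 ns measurement window

benign = np.exp(-t / 2.0e-9)      # illustrative lifetime: tau = 2.0 ns
suspect = np.exp(-t / 0.8e-9)     # illustrative lifetime: tau = 0.8 ns

print(f"benign:  tau = {fit_lifetime(t, benign) * 1e9:.2f} ns")   # 2.00 ns
print(f"suspect: tau = {fit_lifetime(t, suspect) * 1e9:.2f} ns")  # 0.80 ns

# Repeating the fit at several wavelengths is what lets you separate a genuine
# lifetime change (binding) from an apparent one caused by scattering.
```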
So far, I've been telling you, hey, let's look at light. That's because most of the human body behaves in very distinctive ways in the optical spectrum. If you go to X-rays and other wavelengths, the body does not give you much functional information-- you mostly just see through it. But with light, we can start looking at hemodynamics, many cardiovascular conditions, and interactions between tissue and metabolic conditions.
But sometimes we can take this idea of femtophotography and apply it to other parts of the electromagnetic spectrum. For example, to read through a book, we can use terahertz imaging, which has a wavelength of about 100 micrometers, and read through the book page by page. We published this literally a month ago, in early September. In this case, there's a video that shows we can take nine pages-- not a whole book, just nine pages so far, as of today-- with writing in ordinary ink on them, and use the terahertz scanner to scan through them. It's like OCT in ophthalmology, which you might be familiar with.
But the problem is that the characters all overlap on top of each other. And terahertz does penetrate through, but it doesn't give you much contrast between ink and paper-- only about 4% contrast. So we had to solve multiple problems: dealing with very small changes in refractive index, overlapping text, and the fact that the thickness of the paper is only about 25 micrometers-- half the width of a human hair.
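To get a feel for the timing involved, here is a quick calculation (a sketch, with an assumed refractive index for paper) of how closely spaced the echoes from successive pages are:

```python
C = 3e8            # speed of light in vacuum, m/s
thickness = 25e-6  # page thickness from the talk, meters
n_paper = 1.5      # assumed refractive index of paper (illustrative)

# Round-trip delay between reflections off successive page boundaries:
delta_t = 2 * thickness * n_paper / C
print(f"echo spacing: {delta_t * 1e15:.0f} fs")  # 250 femtoseconds

# Nine pages produce nine overlapping echo trains, each only ~0.25 ps apart,
# with only ~4% contrast between ink and paper -- hence the need for careful
# time-of-flight deconvolution.
```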
And then, as I said, we can use femtophotography in other parts of the spectrum. So imagine, in a global health setting, a CAT scan machine that can fit in a rickshaw. As you know, one of the biggest challenges for CAT scan machines is the high-g acceleration they create, which makes them very difficult to move around. But now we can start creating electromechanical configurations that allow you to make them extremely portable, and also handle dynamic calibration on the fly.
We can use femtophotography with Wi-Fi cameras. We're all excited about, say, a health meter that tells you something interesting and lets you interact with it-- but you don't want to put cameras in your washroom, in your bathroom. So what if we could interact with it using just ambient Wi-Fi? We have developed a whole set of techniques that exploit the time of flight of the Wi-Fi signal and let you interact that way as well. We have some other projects-- looking at the ear, for example; this is our scientist, Anshuman Das. We can create new otoscopes that capture microscopic changes in the profile of the tympanic membrane, or look at the throat and tonsils-- again, looking at dynamic variations at microscopic scales.
And one of the projects we spun out of my lab is called EyeNetra, which is a phone that snaps into a binocular. You look through it, click a few buttons, and it gives you a prescription for your eyeglasses-- your near-sightedness, your far-sightedness, and your astigmatism. And as a bonus, you can also screen for cataracts. We spun it out about three years ago, and it has already sold a couple of million dollars' worth of units. You can go to EyeNetra.com.
And the interesting part of this project is that we started it about three years ago, and now it looks like virtual reality for measuring eyeglass prescriptions. So now you can just say, hey, use something like an Oculus to get prescriptions for your eyeglasses and screen for many other eye conditions. We're going to see this very interesting fusion of other emerging technologies with health diagnostics as well.
And then we went one step further and created this EyeSelfie solution. Having someone else take measurements of your retina is like being a passenger in a car-- it's much easier if you're the one steering the vehicle. It's the same with EyeSelfie: if you want a picture of your retina, it turns out it's much easier to create an instrument that you hand directly to the patient-- or the subject-- so they can just take a selfie of their retina.
And then, of course, we can do predictive analytics based on the hemodynamics. We have created an AI platform called OpenDr, which anybody can subscribe to, built by one of my students, Tristan Swedish. We are working with many partners now; the first target is diabetic retinopathy, because it's a relatively straightforward problem to solve using machine learning and artificial intelligence. The API is in the public domain as well, on our MIT website.
So I'm really delighted to have a talented team sitting in Cambridge and solving some amazing problems out there. I'm really blessed. But when it comes to the dozens and dozens of ideas we're generating, we quickly realized that the traditional model-- a master's student or a PhD student taking each one out into the world-- is going to take a lot of time. So why don't we think about the problems in global health in a very different way?
And so over the last four or five years, we have tried a few models for how we can have an impact in global health. I encourage you to go to our website and look at the many projects that are going on. We have 12 different solutions that are being tried and deployed. So it's pretty exciting to see how some of these models have failed and others have succeeded in moving forward.
But the main lesson in solving global health challenges is that an innovator is usually not an entrepreneur. Going from invention to impact is a sufficiently complex problem that start-ups may or may not be the right way to make an impact. If you just use the venture-funding model and a business plan as, somehow, a Band-Aid to go from invention to impact, it simply doesn't work.
So that takes us to the second problem: how do we make the invisible visible so we can discover and unleash talented minds? The platform that really seems to be working for us is called REDX. Instead of online learning or peer-to-peer learning, it's a peer-to-peer invention platform. And I'm very delighted that the Lemelson-MIT award is allowing us to amplify this, in collaboration with them.
And here are just some snapshots of this peer-to-peer invention platform working at MIT-- and also working at LVP, where we are looking at solutions for the blind, for low vision, and for superhuman vision. There's some work going on in Mumbai looking at oral health, sleep disorders, and hearing challenges. It's really a platform for young innovators to start innovating for billions of people without getting lost in the traditional startup methodology-- forming teams, identifying problems on their own, finding the stakeholders, navigating the path, even creating a business plan. They really don't know most of these things yet. So there's a lot of hidden talent that we want to bring forward.
And in that sense, REDX is really a platform we have created that flips the venture-funding process. So I really encourage you to come and take a look at us. So what do you think about the lines now? Ah. Just when you think the invisible has become visible, the world changes. And in global health, we'll have yet another question. Thank you.
[APPLAUSE]
[MUSIC PLAYING]