How long until we see this kind of thing?
There are two missing pieces. The first is a long-term interface to the brain and spinal cord. We can put electrodes in the brain, but we have yet to make them last more than a few years. A brain has this way of insulating things stuck in it, which is a good thing. The second is that we just don't know enough about what the brain and spinal cord are doing to build this exoskeleton. If it's 2154, there's a chance we might have figured that out; 2054, I think not.
Another problem with the exoskeleton is power. Our bodies are really power efficient. When we walk we use very little power—we just fall from step to step, and the Achilles tendon picks up some of that force and pushes us forward. We can walk for miles and miles. The Raytheon exoskeleton has this big tether coming off the back of it because it consumes a tremendous amount of energy. The battery technology is not there yet. Nuclear engineers have very small power supplies that could run something like [Elysium's] exoskeleton for 100 years, but if you ever breach one, you have a serious nuclear emergency.
But you've already wired together insects and robots?
I build hybrid robots that involve insect brains. Interfacing insects to robots started with hawk moths. They're cheap, and it's easy for me to get them. My first experiment was taking a hawk moth [and] putting an electrode on a neuron in its brain that responds to left and right motion. You could make a robot turn left or right depending on what the moth saw. It was just to show it could be done, but it turned out to be pretty difficult.
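As a rough illustration (not the interviewee's actual setup), the control loop described here—reading a left/right motion-sensitive neuron and steering a robot from its firing rate—might be sketched like this. The function name, spike-rate threshold, and the idea of comparing two rate channels are all assumptions for the sake of the example; a real electrode and motor interface would look quite different.

```python
# Hypothetical sketch of the moth-steered robot described above.
# All values and interfaces are stand-ins, not real hardware APIs.

def steer_from_neuron(left_rate_hz: float, right_rate_hz: float,
                      threshold_hz: float = 50.0) -> str:
    """Map spike rates from a left/right pair of motion-sensitive
    neurons to a steering command for the robot."""
    if left_rate_hz > threshold_hz and left_rate_hz > right_rate_hz:
        return "turn_left"
    if right_rate_hz > threshold_hz:
        return "turn_right"
    return "go_straight"

# Example: the neuron tuned to leftward motion fires strongly.
command = steer_from_neuron(left_rate_hz=80.0, right_rate_hz=10.0)
print(command)  # turn_left
```

The point of the sketch is only that the biology does the hard part (detecting motion); the silicon side reduces to a simple threshold comparison.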
Then there are dragonflies. Dragonflies are awfully good at detecting small, moving targets. They live for several years as aquatic organisms and come out as adults for only eight to 12 weeks. All they care about is finding food and mating. They are predatory machines. They are looking for other flying objects, and they either want to eat them or mate with them. You can use one as a living visual sensor to guide a robot to a small, moving target.
I'm taking a praying mantis and giving it a body that's much bigger than its own and the ability to control that body. It's like giving it a massively large exoskeleton to see whether it can learn to control it. Flight behavior is relatively simple. I'm interested in the praying mantis because it has very complex walking behavior and I want to see if it can transfer that to a robot.
What applications do you foresee?
Brain tissue is really good at some things that regular computers are not. Processing sensory information is a good example. We are not able to build visual sensors as good at detecting small moving objects as a dragonfly's, and certainly not with that level of power consumption. So what if you genetically engineer a living visual system and slap it onto a robot? Then you have the best of both worlds: high-speed silicon-based processing and neural processing that does what it does best. Take the hawk moth. They can be trained to detect explosives—they can smell them. You hook into the olfactory system of the hawk moth and build a bomb-sniffing robot that has the drive electronics of a regular robot and the nose of a hawk moth. Or take a dragonfly and use its visual system. Then you have a robot with the intelligence of a dragonfly; it can avoid obstacles and fly around.
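The hybrid architecture described here—a biological nose feeding a robot's ordinary drive electronics—can be sketched in the same spirit. Everything below is invented for illustration: the channel names, the alarm threshold, and the `detect_explosive` function are assumptions, not a real antennal-lobe recording API.

```python
# Hypothetical sketch of the "moth-nose" bomb-sniffing robot
# described above: neural olfactory readings gate the robot's
# regular drive electronics. Channel names and rates are invented.

from dataclasses import dataclass

@dataclass
class OlfactoryReading:
    channel: str          # which olfactory electrode the reading came from
    spike_rate_hz: float  # measured firing rate on that channel

def detect_explosive(readings: list[OlfactoryReading],
                     target_channel: str = "TNT-sensitive",
                     alarm_rate_hz: float = 100.0) -> bool:
    """Return True if the explosive-tuned channel is firing
    above its alarm threshold."""
    return any(r.channel == target_channel and r.spike_rate_hz >= alarm_rate_hz
               for r in readings)

readings = [OlfactoryReading("ambient", 12.0),
            OlfactoryReading("TNT-sensitive", 140.0)]
if detect_explosive(readings):
    print("mark location and stop")  # the silicon side takes over here
```

As in the moth-steering example, the division of labor is the point: the insect's sensory system does the detection, and the robot's conventional electronics only have to act on a simple yes/no signal.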