For a full decade, this interface has used brain-derived signals to generate movements of robotic arms, hands and legs in animal experiments. A critical breakthrough occurred last year, when two monkeys in our lab learned to exert neural control over a computer-generated avatar arm that not only touched objects in a virtual world but also sent an “artificial tactile” feedback signal directly to each monkey's brain. The software allowed us to train the animals to feel what it was like to touch an object with virtual fingers controlled directly by their brains.
The Walk Again consortium, with its international team of neuroscientists, roboticists, computer scientists, neurosurgeons and rehabilitation professionals, has begun to build on these animal research findings to create a completely new way to train and rehabilitate severely paralyzed patients, teaching them to use brain-machine interface technologies to regain full-body mobility. Indeed, the first baby steps for our future ceremonial kicker will happen inside an advanced virtual-reality chamber known as a Cave Automatic Virtual Environment, a room in which images are projected onto every wall, including the floor and ceiling. After donning 3-D goggles and a headpiece that noninvasively detects brain waves (through techniques known as electroencephalography, or EEG, and magnetoencephalography), our candidate kicker, by necessity a lightweight teenager for this first iteration of the technology, will become immersed in a virtual environment that stretches out in all directions. There the youngster will learn to control the movements of a software body avatar through thought alone. Little by little, the motions induced in the avatar will increase in complexity, culminating in demanding tasks such as walking on changing terrain or unscrewing the top of a virtual jelly jar.
Plugging into Neurons
The mechanical movements of an exoskeleton cannot be manipulated as readily as those of a software avatar, so the technology and the training will be more complicated. It will be necessary to implant electrodes directly in the brain to manipulate the robotic limbs. We will need not only to place the electrodes under the skull in the brain but also to increase the number of neurons that can be “read” simultaneously throughout the cortex. Many of the sensors will be implanted in the motor cortex, the region of the frontal lobe most readily associated with generating the motor program that is normally downloaded to the spinal cord, from which neurons directly control and coordinate the work of our muscles. (Some neuroscientists believe that this interaction between mind and muscle may be achieved through a noninvasive method of recording brain activity, such as EEG, but that goal has yet to be practically achieved.)
Gary Lehew in my group at Duke has devised a new type of sensor: a recording cube that, when implanted, can pick up signals throughout a three-dimensional volume of cortex. Unlike earlier brain sensors, which consist of flat arrays of microelectrodes whose tips record neuronal electrical signals, Lehew's cube extends sensing microwires up, down and sideways throughout the length of a central shaft.
The current version of our recording cubes contains up to 1,000 active recording microwires. Because four to six single neurons can typically be recorded from each microwire, every cube can potentially capture the electrical activity of between 4,000 and 6,000 neurons. Assuming that we could implant several of those cubes in the frontal and parietal cortices (areas responsible for high-level control of movement and decision making), we could obtain a simultaneous sample of tens of thousands of neurons. According to our theoretical software modeling, this sample would suffice to control the flexible movements needed to operate a two-legged exoskeleton and restore autonomous locomotion to our patients.
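The channel arithmetic above can be checked with a quick back-of-the-envelope calculation. The microwire count and neurons-per-wire figures come from the text; the number of implanted cubes is an illustrative assumption, since the text says only "several":

```python
# Back-of-the-envelope estimate of simultaneously recorded neurons.
microwires_per_cube = 1_000   # active recording microwires per cube (from the text)
neurons_per_wire = (4, 6)     # single neurons resolvable per microwire (from the text)
cubes_implanted = 4           # illustrative assumption; the text says "several"

# Yield of a single cube: 4,000-6,000 neurons, as stated in the text.
cube_low = microwires_per_cube * neurons_per_wire[0]
cube_high = microwires_per_cube * neurons_per_wire[1]
print(f"One cube: {cube_low:,}-{cube_high:,} neurons")

# Several cubes together reach the "tens of thousands" range.
total_low = cubes_implanted * cube_low
total_high = cubes_implanted * cube_high
print(f"{cubes_implanted} cubes: {total_low:,}-{total_high:,} neurons")
```

With four cubes the estimate lands at 16,000 to 24,000 simultaneously recorded neurons, consistent with the "tens of thousands" the modeling calls for.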