Brain-controlled prosthetic devices have the potential to dramatically improve the lives of people with limited mobility resulting from injury or disease. To drive such brain-computer interfaces, neuroscientists have developed a variety of algorithms to decode movement-related thoughts with increasing accuracy and precision. Now researchers are expanding their tool chest by borrowing from the world of cryptography to decode neural signals into movements.
During World War II, codebreakers cracked the German Enigma cipher by exploiting known language patterns in the encrypted messages. These included the typical frequencies and distributions of certain letters and words. Knowing something about what they expected to read helped British mathematician Alan Turing and his colleagues find the key to translate gibberish into plain language.
Many human movements, such as walking or reaching, follow predictable patterns, too. Limb position, speed and several other movement features tend to play out in an orderly way. With this regularity in mind, Eva Dyer, a neuroscientist at the Georgia Institute of Technology, decided to try a cryptography-inspired strategy for neural decoding. She and her colleagues published their results in a recent study in Nature Biomedical Engineering.
“I’ve heard of this approach before, but this is one of the first studies that’s come out and been published,” says Nicholas Hatsopoulos, a neuroscientist at the University of Chicago, who was not involved in the work. “It’s pretty novel.”
Existing brain-computer interfaces typically use so-called ‘supervised decoders.’ These algorithms rely on detailed moment-by-moment movement information such as limb position and speed, which is collected simultaneously with recorded neural activity. Gathering these data can be a time-consuming, laborious process. This information is then used to train the decoder to translate neural patterns into their corresponding movements. (In cryptography terms, this would be like comparing a number of already decrypted messages to their encrypted versions to reverse-engineer the key.)
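A minimal sketch can make the supervised setup concrete. Everything below is illustrative stand-in data, not the study's recordings: synthetic "neural" activity is generated from known cursor velocities, and a linear least-squares decoder is fit on the paired data, which is the simplest version of this kind of supervised training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (not from the study): 1,000 time bins of
# activity from 100 neurons, paired with 2 movement features
# (horizontal and vertical cursor velocity).
n_bins, n_neurons = 1000, 100
velocity = rng.standard_normal((n_bins, 2))           # recorded movements
tuning = rng.standard_normal((2, n_neurons))          # hidden neural tuning
neural = velocity @ tuning + 0.5 * rng.standard_normal((n_bins, n_neurons))

# Supervised decoding: because every time bin of movement is recorded
# alongside the neural activity, a linear map from neurons to velocity
# can be fit directly by least squares.
weights, *_ = np.linalg.lstsq(neural, velocity, rcond=None)
predicted = neural @ weights

# How well the decoder recovers the true horizontal velocity.
r = np.corrcoef(predicted[:, 0], velocity[:, 0])[0, 1]
print(f"decoded vs. true horizontal velocity: r = {r:.2f}")
```

The key point is the dependence on `velocity`: the fit is impossible without the moment-by-moment movement record, which is exactly the data that is laborious to collect.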
By contrast, Dyer’s team sought to predict movements using only the encrypted messages (the neural activity) and a general understanding of the patterns that pop up in certain movements. Her team trained three macaque monkeys to guide a cursor to a number of targets arranged about a central point, either by reaching with an arm or by bending a wrist. At the same time, the researchers used implanted electrode arrays to record the activity of about 100 neurons in each monkey’s motor cortex, a key brain region that controls movement.
Over the course of many experimental trials, the researchers gathered statistics about each animal’s movements, such as the distribution of horizontal and vertical speeds. A good decoder, Dyer says, should find corresponding patterns buried in the neural activity that map onto the patterns seen in the movements. To build their decoding algorithm, the researchers first applied a dimensionality-reduction analysis to the neural activity to extract and pare down its core mathematical structure. Then they tested a slew of computational models to find the one that most closely aligned the neural patterns to the movement patterns.
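The two-step pipeline can be sketched on toy data. Every choice below is an illustrative assumption rather than the study's actual algorithm: a lopsided 2-D velocity distribution, plain PCA for the dimensionality reduction, a histogram-based distance as the distribution mismatch, and a grid of rotations and reflections as the candidate models. The crucial feature it shares with the cryptographic approach is that the decoder is chosen by matching movement *statistics*, never paired neural/movement trials.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 2-D movement velocities with a lopsided,
# elongated distribution, and neural activity that hides a rotated
# copy of those velocities inside 100 dimensions.
n = 2000
moves = np.column_stack([rng.exponential(1.0, n),          # skewed horizontal speed
                         0.5 * rng.standard_normal(n)])    # narrow vertical speed
moves -= moves.mean(axis=0)

angle = np.deg2rad(40)                                     # hidden rotation to recover
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
embed = np.linalg.qr(rng.standard_normal((100, 2)))[0].T   # isometric 2 -> 100 embedding
neural = (moves @ R) @ embed + 0.1 * rng.standard_normal((n, 100))

# Step 1: pare the neural activity down to its core low-dimensional
# structure (here, a plain PCA projection to 2 components).
centered = neural - neural.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
low_d = centered @ vt[:2].T

# Step 2: search candidate transforms (rotations, with and without a
# reflection) for the one whose output best matches the *statistics*
# of movement -- no paired neural/movement trials are used here.
def mismatch(points, reference, bins=12, box=8.0):
    """Crude distribution distance: L1 gap between normalized 2-D histograms."""
    extent = [[-box, box], [-box, box]]
    h1, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins, range=extent)
    h2, _, _ = np.histogram2d(reference[:, 0], reference[:, 1], bins=bins, range=extent)
    return np.abs(h1 / n - h2 / n).sum()

best_score, decoded = np.inf, None
for flip in (1.0, -1.0):
    for deg in range(0, 360, 2):
        a = np.deg2rad(deg)
        T = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]]) @ np.diag([1.0, flip])
        candidate = low_d @ T
        score = mismatch(candidate, moves)
        if score < best_score:
            best_score, decoded = score, candidate
```

A working decoder of this kind would need richer candidate models and a finer measure of distributional fit, but the sketch shows why only summary statistics of movement are needed, and why those statistics could come from a different animal entirely.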
When the researchers used their best model to decode neural activity from individual trials, they were able to predict the animals’ actual movements on those trials about as well as some basic supervised decoders. “It’s a very cool result,” says Jonathan Kao, a computational neuroscientist at the University of California, Los Angeles, who was not involved in the study. “My prior thought would have been that having the moment-by-moment information of the precise reach, knowing the velocity at every moment in time, would have allowed you to build a better decoder than if you just had the general statistics of reaching.”
Because Dyer’s decoder only required general statistics about movements, which tend to be similar across animals or across people, the researchers were also able to use movement patterns from one monkey to decipher reaches from the neural data of another monkey—something that is not feasible with traditional supervised decoders. In principle, this means that researchers could reduce the time and effort involved in collecting meticulously detailed movement data. Instead, they could acquire the information once, and re-use or distribute those data to train brain-computer interfaces in multiple animals or people. “It could be very useful to the scientific community and to the medical community,” Hatsopoulos says.
Dyer calls her work a proof of concept for using cryptographic strategies to decode neural activity, and notes that much more work must be done before the method can be used widely. “By comparison to state-of-the-art decoders, this is not yet a competitive method,” she says. The algorithm could potentially be strengthened by feeding it signals from even more neurons, or providing additional known features of movements, such as the tendency of animals to produce smooth motions. To be practical for guiding prosthetic devices, the approach would also have to be adapted to decode more complex, natural movements—a non-trivial task. “We’ve only kind of scratched the surface,” Dyer says.