Brain Implant Lets Man with ALS Speak and Sing with His ‘Real Voice’

A new brain-computer interface turns thoughts into singing and expressive speech in real time


The motor cortex (orange, illustration). Electrodes implanted in this region helped to record the speech-related brain activity of a man who could not speak intelligibly.

Kateryna Kon/Science Photo Library/Alamy Stock Photo


A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system — known as a brain–computer interface (BCI) — used artificial intelligence (AI) to decode the participant’s electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person’s intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion.

In a study, a synthetic voice that mimicked the participant’s own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech with a delay of around three seconds or produced it only after users finished miming an entire sentence.

“This is the holy grail in speech BCIs,” says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. “This is now real, spontaneous, continuous speech.”

Real-time decoder

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear.

Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, into a brain region that controls movement. Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.

“We don’t always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,” explains Wairagkar. “In order to do that, we have adopted this approach, which is completely unrestricted.”

The team also personalized the synthetic voice to sound like the man’s own, by training AI algorithms on recordings of interviews he had done before the onset of his disease.

The team asked the participant to attempt to make interjections such as ‘aah’, ‘ooh’ and ‘hmm’ and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Freedom of speech

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder’s training data. He told the researchers that listening to the synthetic voice produce his speech made him “feel happy” and that it felt like his “real voice”.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly. “We are bringing in all these different elements of human speech which are really important,” says Wairagkar. Previous BCIs could produce only flat, monotone speech.

“This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,” says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system’s features “would be crucial for adoption for daily use for the patients in the future.”

This article is reproduced with permission and was first published on June 11, 2025.

Miryam Naddaf is a science writer based in London.


First published in 1869, Nature is the world's leading multidisciplinary science journal. Nature publishes the finest peer-reviewed research that drives ground-breaking discovery, and is read by thought-leaders and decision-makers around the world.

