Neuroscientists Take Important Step toward Mind Reading

A new computer algorithm can guess what you are looking at based on brain activity alone


Legions of science-fiction authors have imagined a future that includes mind-reading technology. Although the ability to play back memories like a movie remains a distant dream, a new study has taken a provocative step in that direction by decoding neural signals for images.

Neuroscientist Kendrick Kay and his colleagues at the University of California, Berkeley, were able to successfully determine which of a large group of never-before-seen photographs a subject was viewing based purely on functional MRI data. By analyzing fMRI scans of viewers as they looked at thousands of images, Kay’s team created a computer model that uses picture elements such as angles and brightness to predict the neural activity elicited by a novel black-and-white photograph. Then the researchers scanned subjects while showing them new snapshots. Most of the time Kay’s model could single out which image the subject was viewing by matching its prediction of brain activity to the actual activity measured by the fMRI scanner, although very similar pictures tended to baffle the program.
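In outline, the identification step works like a lineup: the model predicts the brain-activity pattern each candidate image should evoke, then picks the candidate whose prediction best matches the measured scan. Below is a minimal toy sketch of that matching logic in Python; the linear encoding model, the feature vectors, and all dimensions are illustrative stand-ins, not the study's actual code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 50 "voxels" and 10 image features
# (stand-ins for the angle/brightness features the model uses).
n_voxels, n_features = 50, 10

# Hypothetical encoding model: each voxel's response is a linear
# combination of image features (in the study, fit from training scans).
weights = rng.normal(size=(n_voxels, n_features))

def predict_activity(image_features):
    """Predict the fMRI activity pattern an image should evoke."""
    return weights @ image_features

def identify_image(measured_activity, candidate_features):
    """Return the index of the candidate whose predicted activity
    correlates best with the measured activity."""
    scores = [np.corrcoef(measured_activity, predict_activity(f))[0, 1]
              for f in candidate_features]
    return int(np.argmax(scores))

# Simulate a trial: the subject views image #3 out of 5 novel images,
# and the scanner records the evoked pattern plus a little noise.
candidates = [rng.normal(size=n_features) for _ in range(5)]
measured = predict_activity(candidates[3]) + rng.normal(scale=0.1, size=n_voxels)

print(identify_image(measured, candidates))
```

As the article notes, this scheme struggles when two candidates are very similar: their predicted activity patterns are then nearly identical, and noise in the scan can tip the correlation toward the wrong one.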

Kay’s reproduction of the age-old “pick a card, any card” trick is intriguing to visual neuroscience researchers because of his algorithm’s versatility. Perhaps more interesting to science-fiction buffs is Kay’s opinion that someday his algorithm might perform “at least some degree of [image] reconstruction” based on fMRI data. Starting from brain activity alone, his model should be able to deduce, for example, an image’s overall brightness. The team has not yet studied the model in this capacity, however; Kay says it is too early to gauge exactly how much information the program can glean from a brain scan.

As for truly reading people’s thoughts, Kay does not foresee anything of that nature in this century. Technological improvement, he explains, may yield piles of brain data. Without sufficient insight into the brain’s workings, however, we will have no idea what it all means.

This story was originally printed with the title, "Can You Read My Mind?".

This article was published with the title “Neuroscientists Take Important Step toward Mind Reading” in SA Mind Vol. 19 No. 3 (), p. 11
doi:10.1038/scientificamericanmind0608-11
