As a favor to friends in my academic department, I have frequently been a guinea pig in the functional magnetic resonance imaging (fMRI) scanner. In most of these cases, I fight valiantly against slumber as the stimuli flash on the small screen in front of me and the hypnotic, high-pitched beeps of the scanner reverberate all around. This time, though, it was different. Martin Monti, a fellow neuroscientist at the MRC Cognition and Brain Sciences Unit in Cambridge, England, was going to read my mind. As the bed I lay on slid robotically into the giant doughnut-shaped scanner, I had a strange sensation that I was about to be seen naked—mentally, at least.

The task was simple: Monti would ask me questions—did I have any siblings, did I think England was going to win the soccer match that night, and so on. If I wanted to answer “yes,” then I would imagine myself playing tennis, activating a known set of motor regions in my brain by doing so. If I wanted to answer “no,” then I was to imagine navigating around the rooms of my home, activating an entirely different set of areas involved in scene perception. Given that each scan—and thus each of my yes or no answers—took five minutes, the conversation was not the most riveting I had ever had, but when Monti accurately guessed my response every time, it was nonetheless thrilling and unnerving in equal measure.
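
The logic of the decoding step can be caricatured in a few lines of Python. The sketch below is a toy, not the researchers’ actual analysis: the signal values are invented, and real decoding compares full patterns of activity across many voxels, as described later in this article, rather than two simple averages.

    import numpy as np

    def decode_answer(motor_signal, scene_signal):
        # Toy rule: answer "yes" if motor-imagery areas were more active
        # than scene-imagery areas over the five-minute scan.
        return "yes" if motor_signal.mean() > scene_signal.mean() else "no"

    # Fake time courses: 150 brain volumes (one every two seconds for five minutes).
    rng = np.random.default_rng(seed=1)
    motor = rng.normal(loc=0.4, size=150)   # elevated: the subject imagined tennis
    scene = rng.normal(loc=0.0, size=150)

    print(decode_answer(motor, scene))      # -> "yes"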

Last year Monti and others used this technique on a patient diagnosed as being in a permanent vegetative state, who showed few outward signs of awareness. The researchers demonstrated that the patient was still conscious and could even communicate, as they reported in the New England Journal of Medicine on February 18, 2010. The patient responded to questions with “yes” and “no” the same way I did, by thought alone. No other means currently exist that could have shown that a fully aware, communicative mind was trapped in the patient’s unresponsive body. [For a preview of this technique as it was being developed, see “Freeing a Locked-In Mind,” by Karen Schrock; Scientific American Mind, April/May 2007.]

Such a feat of scientific telepathy was unheard of a decade ago. But now “mind reading” in various guises is beginning to dominate the field of neuroscience. What caused this revolution? Over the past five years many scientists have changed the way they analyze the data they gather from brain scans. Using a new information-crunching technique, they have deciphered brain activity to reveal not only the content of conscious thought but also information from participants’ unconscious minds—even re-creating the images in movies they are watching. The new technique has led to insights into the intricate workings of memory and the complex process of decision making. And the method is still in its infancy—the most exciting breakthroughs are no doubt still to come.

Seeing the Forest and the Trees
The quest to get inside other people’s heads is far from new. Polygraph machines represent a century’s worth of persistent attempts to use technology to decode thoughts. But lie detectors work indirectly—they identify only the stress response that may or may not be a sign of dishonesty. To truly read thoughts, scientists need to directly decode brain activity. Brain-computer interfaces are progressing rapidly on this front, using electroencephalography (EEG) or electrodes implanted in the brain to detect neural signals and translate them into commands to move robotic arms or cursors on a computer screen. Researchers are using such technology today to train patients whose ability to move is slowly failing because of amyotrophic lateral sclerosis, or Lou Gehrig’s disease, to control a communication interface by thought alone. [For more on brain-computer interfacing, see “Chips in Your Head,” by Frank W. Ohl and Henning Scheich; Scientific American Mind, April/May 2007.]

But this type of signal decoding, though hugely important in medicine, has limited mind-reading potential; it requires that users practice extensively to direct their thoughts in such a way that a computer can translate their brain signals into commands for motion or speech. Decoding a range of thought processes without resorting to heavy training regimes requires a very different approach.

Enter fMRI. Developed in the 1990s, this imaging technology offered a radical new opportunity to peer inside the mind as it worked, by detecting blood flow to active brain areas. But fMRI data sets can be vast. Each image of activity might contain 100,000 three-dimensional pixels, called voxels, with a new image taken every two seconds, for up to an hour. Multiply that by around 20 subjects in a study, and you end up with perhaps four billion voxels to examine. The traditional way of handling this deluge is to focus on a single voxel location at a time, across all the subjects, and to see whether activity there rises or falls over time, in accord with the mental fluctuations under study.
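
To see where the four-billion figure comes from, the arithmetic is simple enough to check in a few lines of Python (the inputs are the illustrative numbers above, not the specification of any particular scanner):

    voxels_per_image = 100_000        # ~100,000 voxels per brain volume
    seconds_per_image = 2             # one volume acquired every two seconds
    scan_seconds = 60 * 60            # a one-hour session
    subjects = 20                     # participants in the study

    images_per_subject = scan_seconds // seconds_per_image   # 1,800 volumes
    total = voxels_per_image * images_per_subject * subjects
    print(f"{total:,}")               # 3,600,000,000 -- roughly four billion voxels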

But analyzing brain scans in this way involves throwing away vast amounts of useful data by ignoring how these voxels might be working together, in a pattern of activity, to represent information. The old method is comparable to looking at a fuzzy photograph and concluding that only the bright regions are important. The new method would consider all the textures and contrasts of the fuzzy photograph, gauging how they relate to one another to create shapes and figures—and ultimately recognize a picturesque landscape or a smiling face.

This new, far more sensitive method, known as multivariate pattern analysis (MVPA), is effectively a form of artificial intelligence. A machine-learning program learns to link mental events with specific patterns of brain activity—for instance, when told a person is thinking about tennis, it finds the corresponding signal in the pattern of activity among motor-area voxels—and then, based on what it has learned, it predicts a person’s mental state from new brain data. Each time the program spots an identifiable pattern of brain signals, it makes a prediction about what the person is thinking—whether it is playing tennis or, if the telltale brain activity takes a different form, something else entirely. These predictions potentially allow neuroscientists to read minds.
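
For readers who want to see the recipe in the concrete, the sketch below shows how such a pattern classifier is typically built with Python’s scikit-learn library. It is a minimal illustration on fabricated data: the voxel counts, the planted “tennis” signal and the choice of a linear support-vector machine are assumptions for the example, not details of any published study.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(seed=0)

    # Stand-in data: 100 scans x 500 voxels. A real study would load
    # preprocessed fMRI volumes; here we plant a faint signal in the
    # first 20 voxels so the classifier has a pattern to learn.
    X = rng.normal(size=(100, 500))
    y = rng.integers(0, 2, size=100)        # 0 = navigation, 1 = tennis
    X[y == 1, :20] += 0.5                   # weak multivoxel "tennis" pattern

    # A linear classifier learns one weight per voxel; cross-validation
    # tests whether the learned pattern predicts held-out scans.
    clf = LinearSVC(C=1.0, max_iter=10_000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance is 0.50)")

The crucial point is that the classifier weighs all 500 voxels at once, so no single voxel has to carry the signal on its own; that is precisely what the one-voxel-at-a-time approach described earlier would miss.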

Locating Consciousness
The main early successes of MVPA came in the tricky attempt to study how brain activity generates consciousness. For example, how do people make visual sense of the world around them? In 2005 neuroscientist Geraint Rees and his colleagues at University College London investigated a well-known effect known as binocular rivalry. When different images are presented to each eye, people consciously perceive only one at a time, even though their eyes are viewing both images. Awareness tends to alternate between the two images every 15 seconds or so. Using MVPA, the Rees team pinpointed what happens in the brain as the images flip back and forth. They learned that activity in the primary visual cortex, the first cortical area that responds when we look at something, consists of raw input that has little to do with the image we consciously see. Other, more complex visual regions that become active later in the chain of events turn out to be the areas that create the image people report seeing at any given moment. Standard brain-imaging analysis methods lacked the power to detect such results.

More intriguing, Rees and his colleague John-Dylan Haynes, now at the Bernstein Center for Computational Neuroscience in Berlin, used MVPA in 2005 to read subjects’ unconscious minds. They showed volunteers pictures of a black disk marked with dashed white lines that were oriented in one of two directions. The disk was masked most of the time by a second disk that had crisscrossing lines in both directions. When the mask disappeared, it revealed the target disk for only 17 milliseconds at a time—too short a span for the volunteers to consciously register the direction of the dashed lines. And, as expected, their guesses at the orientation of the lines on the target disk had only chance-level accuracy (50 percent). Using MVPA to study the primary visual cortex, however, the scientists could tell which line orientation a subject was seeing—even though the subject himself did not know! As in the previous study, the results suggest the primary visual cortex holds a raw copy of what the eyes see; only later do other visual brain regions turn that information into conscious perception.

It wasn’t long before these powerful MVPA methods branched out into territory far removed from conscious perception. Although ethically contentious progress is being made using MVPA to predict when a person is lying [see “Portrait of a Lie,” by Matthias Gamer; Scientific American Mind, February/March 2009], considerably more profound results are appearing in another field: decision making.

In 2008 Haynes asked volunteers to carry out a simple task—to choose whether to press the left or right button on a remote control while in the fMRI scanner. When Haynes set his MVPA algorithm to learn which patterns corresponded with this decision, he was astounded to find strong signals in the prefrontal and parietal cortices (areas involved in processing novel or complex goals) up to 10 seconds before the volunteer consciously decided to act. This result has deep ramifications. Does it mean that we have no free will? Or does free will kick in only for more complex decisions? More research will be needed to answer these questions—but it is exciting that MVPA has moved such concerns, once strictly the domain of philosophy, into the province of scientific study.

I Know What You’re Seeing
One drawback of many fMRI studies is that the stimuli are so artificial—say, dashed white lines on a black disk—that their generalization to the real world is limited. But now, because of the flexibility and power of MVPA methods, it is feasible to show photographs or videos in the scanner and analyze the resulting brain activity. Such methods have enabled scientists to refine their understanding of the basic workings of memory. For instance, neuroscientist Eleanor Maguire, also at University College London, and her co-workers recently used MVPA to identify patterns in the part of the brain that stores memories, the hippocampus. As reported in Current Biology on March 23, the researchers showed volunteers three seven-second movie clips depicting women doing everyday activities (for instance, drinking from a coffee cup, then throwing it away). The volunteers then recalled each of the clips while the researchers scanned their brains. Using MVPA, the researchers were able to predict which clip each volunteer was recalling at any given time. They also discovered that particular areas within the hippocampus, including the right and left anterior and the right posterior portions, are especially important for storing these so-called episodic memories.

Impressive though the results have been, the studies to date are relatively crude, capable of identifying only one of a handful of mental states (tennis game or home layout?). This is a far cry from genuine mind reading, in which looking at neural activity would reveal a person’s thoughts without reference to a preset shortlist of possibilities. One lab, though, seems to be edging closer. Neuroscientist Jack Gallant of the University of California, Berkeley, published results in 2008 showing that his pattern-recognition programs can guess which of 1,000 pictures a person has just viewed—a dramatic leap from the two or three options other algorithms have learned to parse. And at the Society for Neuroscience conference last fall, he presented data that went much further—actually reconstructing, from the activity in the visual cortex, what volunteers were seeing as they watched a series of movie trailers. For instance, at the very moment that a man in a white shirt appeared on screen, the program would spit out an outline of a white torso. These data have not yet been published in a peer-reviewed journal, and the reconstruction is at a preliminary stage, so the results should be viewed cautiously. Nevertheless, such provisional progress suggests tantalizing possibilities, such as the ability to “read off” a crime witness’s memories or to record and play back the visual imagery in dreams.

Some scientists remain skeptical about the promise of MVPA. The technique’s predictions are statistically significant, but that often means only that the computer’s guesses are a hair’s breadth above chance. Many studies that rely on MVPA to pick between two alternatives score around 60 percent accuracy, for instance, when a blind guess would give 50 percent—a useful improvement, but hardly telepathy. The yes-or-no experiment I took part in is far more robust, partly because it gathers a large amount of data before assessing the guesses. Yet if I were mischievously to imagine playing baseball instead of tennis, or navigating around my childhood home instead of my current one, neither the prediction program nor the experimenter would have a clue that I was breaking the rules.
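
To make the accuracy caveat concrete: in a two-choice experiment, a decoder that errs four times out of ten can still clear the bar of statistical significance. A quick check with Python’s scipy library shows how (the 100-trial count is a hypothetical illustration, not a figure from any particular study):

    from scipy.stats import binomtest

    # 60 correct guesses out of 100 two-choice trials; blind guessing
    # would average 50. Is 60 percent really better than chance?
    result = binomtest(k=60, n=100, p=0.5, alternative="greater")
    print(f"p = {result.pvalue:.3f}")   # ~0.028: significant at the 0.05 level,
                                        # yet the decoder errs 40 percent of the time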

In the end, what the fMRI scanner shows is a noisy, indirect measure of neural activity—blood flow is thought to correlate with activity, but it may not be a perfect proxy. The imperfect nature of the data places inherent limits on what the technology can achieve. And even if fMRI provided a direct measure, it would still be an approximate one: a single voxel represents the collective activity of many tens of thousands of neurons. Still, technological advances in MRI physics may be on the horizon, enabling more reliable, higher-resolution measurements and nudging true mind reading out of the realm of science fiction.