As a favor to friends in my academic department, the MRC Cognition and Brain Sciences Unit in Cambridge, UK, I’ve frequently been a guinea pig in the fMRI scanner. Normally I fight valiantly against slumber as the stimuli flash on the small screen in front of me and the hypnotic, high-pitched beeps of the scanner reverberate around me. This time, though, it was very different. This time, my colleague Martin Monti was going to read my mind. As the bed I lay on robotically slid into the giant donut shape of the scanner, I had a strange sense that I was about to be mentally naked.

The task was simple: for each upcoming scan in the session, Martin would ask me either to imagine that I was playing tennis, or to imagine moving around the rooms of my home. These two mental tasks activate much the same brain regions as actually carrying out the activities – mainly motor regions for tennis and navigation regions for roaming about a building. And because the brain scans of these two tasks look totally different from each other, Martin was going to guess my thoughts from my neural activity alone. Each time a scan finished and Martin accurately ascertained the contents of my imagination, the experience was thrilling and unnerving in equal measure.

In 2006, Adrian Owen, of the same department, had already used a similar technique to demonstrate that a patient diagnosed as being in a persistent vegetative state was in fact conscious. One day soon, it should be possible even to communicate with such patients, where no other means are available, by linking, say, imagining tennis with a "yes" answer and imagining moving around a building with a "no" answer. Similar methods are being used to train patients with motor neurone disease (also known as amyotrophic lateral sclerosis, or ALS), whose motor functions are slowly failing, to control a communication interface by thought alone.

This feat of scientific telepathy was largely unheard of a decade ago. But now, in various guises, it is taking over the field. What has caused such a revolution in brain-scanning? The simple answer is that over the last five years, the way that many researchers analyze their brain-scanning data has radically shifted.

fMRI datasets can be vast. There might be 100,000 3D pixels, called voxels, in each brain activity image, with a new image taken every 2 seconds – for up to an hour. With around 20 subjects in a typical study, that equates to perhaps 4 billion individual voxel measurements to examine in total. The traditional way of breaking this problem down is to take just one of those 100,000 voxel locations at a time, in the same spot across subjects, and test whether its activity rises and falls over time in accord with the mental fluctuations under study.
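To make that traditional approach concrete, here is a minimal sketch in Python, with made-up data shapes standing in for a real scanning session (and far fewer voxels, to keep the toy example small): every voxel's time course is tested separately against a simple on/off task signal, and the voxels that track it most strongly are the ones reported as "active".

```python
import numpy as np

# Hypothetical session: a scan every 2 seconds for an hour (1,800 images),
# each flattened here to 10,000 voxels. Random numbers stand in for real data.
rng = np.random.default_rng(0)
n_scans, n_voxels = 1800, 10_000
data = rng.standard_normal((n_scans, n_voxels))

# A toy task regressor: 30 s of task (15 scans), 30 s of rest, repeated.
task = np.tile(np.repeat([1.0, 0.0], 15), n_scans // 30)

# Mass-univariate analysis: correlate each voxel's time course with the
# task regressor independently, ignoring how voxels covary with each other.
data_c = data - data.mean(axis=0)
task_c = task - task.mean()
r = (task_c @ data_c) / (np.linalg.norm(task_c) * np.linalg.norm(data_c, axis=0))
print("best voxel:", int(r.argmax()), "correlation:", float(r.max()))
```

The key point is in the last few lines: each voxel gets its own score, computed in isolation, which is exactly the information-discarding step the next paragraph describes.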

But some scientists have recently suspected that analyzing brain scans in this way throws away vast amounts of useful data, because it ignores how these voxels might be working together, in a pattern of activity, to represent information. The old, bog-standard method is like being handed a fuzzy photo and concluding only that the bright regions are the most important, because they have the most light. The new method, instead, would look at the same photo's textures and contrasts, and at how its shapes are built up, before recognizing the depiction of Marilyn Monroe in her prime.

This new, far more sensitive method, known as multivariate pattern analysis (MVPA), effectively works like a form of AI. The program learns to link some mental event with a particular pattern of brain activity, and then, drawing on these prior lessons, predicts which mental state a new set of brain data reflects. It is these predictions that now allow neuroscientists potentially to read minds.
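In code, the core of the idea is only a few lines. Here is a minimal sketch, assuming the scikit-learn library and random numbers in place of real scans: a linear classifier is trained on labeled activity patterns and then scored on patterns it has never seen.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Hypothetical training set: one activity pattern per scan (here 500
# voxels from a region of interest), each labeled with the task the
# subject was performing (0 = imagine tennis, 1 = imagine navigating).
rng = np.random.default_rng(0)
patterns = rng.standard_normal((80, 500))   # 80 scans x 500 voxels
labels = np.repeat([0, 1], 40)

# MVPA in essence: fit a classifier to the labeled patterns, then test
# how well it predicts the mental state behind scans it was not trained on.
accuracy = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.0%} (chance is 50%)")
```

With this random stand-in data the accuracy hovers around 50%; with real scans containing genuinely different patterns for the two tasks, it climbs above chance, and that gap is the "mind-reading".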

The main early successes were in the tricky, subtle field of studying how brain activity generates consciousness. If competing images are presented to each eye, using a technique called binocular rivalry, we consciously perceive only one image at a time, even though our eyes are viewing both. Geraint Rees and colleagues at University College London demonstrated via MVPA that the pattern of primary visual cortex activity has little to do with our conscious image, but instead reflects the raw input from the eyes. To uncover brain regions that actually do reflect consciousness, they found, you have to look to later, more complex visual regions. Standard brain-imaging analysis methods simply lacked the power to detect such results.

More intriguingly, Geraint Rees and his colleague John-Dylan Haynes followed up these findings by using MVPA centered on the primary visual cortex to decode the orientation of lines, even though the volunteer was completely unaware of this visual detail. Again, these results paint a picture of our primary visual cortex as little more than the brain’s copy of what our eyes see, with the information being divided up to be processed in more interesting, conscious ways in later visual brain regions.

It wasn’t long before these powerful MVPA methods branched out into territory far removed from perception. Although ethically contentious progress is being made using MVPA to predict when a person is lying, considerably more profound results are appearing in another field. Last year, John-Dylan Haynes, now at the Bernstein Center for Computational Neuroscience in Berlin, gave volunteers a very simple task – just to choose whether to press the left or right button while in the fMRI scanner. When Haynes set his MVPA algorithm to learn which activity patterns corresponded with this decision, he astoundingly found strong predictive signals in the prefrontal and parietal cortex up to 10 seconds before the volunteer had consciously intended to act. Does this mean we have no free will? Or does free will only kick in for more complex decisions? More research is required before these questions can be adequately answered.

One drawback of many fMRI studies is that the stimuli are so artificial as to limit their generalization to the real world. Thanks to the increased flexibility and power of MVPA methods, though, showing everyday photos or movies in the scanner is now quite feasible. For instance, Eleanor Maguire from University College London, and co-workers, have recently used a combination of MVPA and high-resolution imaging to connect patterns in certain sub-regions of the hippocampus with long-term memory acquisition for naturalistic videos.

So far, studies have involved using an algorithm to guess one of a small handful of alternative mental states when presented with a certain brain pattern. This is a far cry from genuine brain-reading, where you simply look at the neural activity and know what the person is thinking, without being given the vital clue of a shortlist of possibilities. One lab, though, seems to be edging ever closer to this more impressive aim. Jack Gallant, of the University of California, Berkeley, published results last year showing that his own flavor of pattern-recognition program can guess which one of a thousand pictures a person has just viewed. And at the Society for Neuroscience conference two months ago he presented data that went many steps further – actually reconstructing what volunteers were seeing, from their visual cortex activity alone, as they viewed a series of movie trailers. For instance, the program would spit out an outline of a white torso just when a man in a white shirt was shown. The data hasn’t yet been published in a peer-reviewed journal, and the reconstructions are still very crude, so caution should be exercised. But such provisional progress nevertheless suggests tantalizing possibilities in future years, such as the ability to “read off” a crime witness’s memories, or the visual imagery in dreams.

Other scientists, though, are skeptical of how much progress can be made with MVPA. Although all these studies demonstrate statistically significant predictions, that often means only that the computer’s guess is a hair’s breadth above chance. Many studies that rely on MVPA to pick between two alternatives score around 60% accuracy, for instance, when a blind guess would give 50% – a useful improvement over chance, but hardly telepathy. The experiment I mentioned at the start of this article is far more robust, partly because each guess draws on far more data. Even so, had I mischievously imagined playing baseball instead of tennis, or navigating my childhood home instead of my current one, neither the prediction program nor the experimenter would have had a clue that I was breaking the rules.
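That gap between "statistically significant" and "impressive" is easy to check for yourself. A quick back-of-the-envelope test, with made-up trial counts, shows how a 60%-accurate decoder clears the bar for significance while remaining far from telepathy:

```python
from scipy.stats import binomtest

# Hypothetical tally: the decoder picks between two mental states and is
# right on 60 of 100 test scans; blind guessing would average 50 of 100.
result = binomtest(k=60, n=100, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.3f}")   # about 0.028: a real, significant effect,
                                    # yet each guess is still wrong 40% of the time
```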

In the end, what the fMRI scanner picks up is a very noisy, indirect measure of neural activity, and this places inherent limits on what’s possible. And even if the measure were direct, a single voxel would still represent the collective activity of many tens of thousands of neurons. Technological advances in MRI physics may be on the horizon, allowing more reliable, higher-resolution measurements. Until then, the ability of fMRI, via pattern analysis, to make fascinating discoveries about the brain is flourishing, even if true mind-reading technology remains – largely – the realm of science fiction.