I'm tapping away at my laptop as colored shapes appear onscreen. I'm supposed to hit the right arrow key—and fast—if the new shape matches the previous one and the left arrow key if it doesn't. Next, in a test of attention, I'm throwing switches on virtual tracks to direct colored trains into appropriately colored stations. It's a little trickier but not much more interesting; I get bored, and my mind wanders. Suddenly, I have two trains about to roll into the wrong stations, with more emerging all the time. It does not end well. I am determined to redeem myself, and I start the next game with clenched teeth. Grids of squares appear, some of which briefly change their shade, and I have to remember their positions. The grids get larger and harder to take in as I play, but I rack up a big score anyway. Final verdict: I'm in the 92nd percentile for memory, the 80th percentile for speed—and the 13th for attention. I suppose the problem was lack of attention, but it didn't help that I am color-blind.

I have just taken Lumosity's Fit Test, a free online assessment and lure for new customers. Lumos Labs, which created the Lumosity program, is one of the biggest players in the rapidly growing “brain-training” industry, alongside outfits with such enticing monikers as CogniFit, MindSparke, Cogmed, HAPPYneuron, Posit Science and Jungle Memory. Market research firm SharpBrains estimates global spending on brain health technology, including both software and “biometrics” such as electroencephalogram headsets, was around $1.3 billion in 2013, up from $210 million in 2005. It predicts that the figure will hit $6 billion by 2020.

Most early clients were schools and health care providers buying programs such as those offered by Cogmed (which claims to treat attention-deficit/hyperactivity disorder, or ADHD, and other learning problems), but private consumers now make up the largest, fastest-growing segment, led by baby boomers. They are drawn by ads that promise boosts to mental performance and fitter brains, with companies claiming that training can help customers maintain mental function into old age or can even prevent dementia. Such appeals are bound to have a big impact when, according to an AARP survey, “staying mentally sharp” is a greater concern than physical health for people age 50 and older. The ads usually also boast that products are “designed by leading neuroscientists” and are “scientifically tested.”

But no sooner had brain training hit the market, roughly a decade ago, than the clamor began. Skeptics pointed out that many studies suffered from serious flaws and raised questions about the evidence of benefits. Media reports soon began denouncing the industry—“Brain Games Are Bogus,” proclaimed the New Yorker in a 2013 article. And last October a group of more than 70 neuroscientists working under the auspices of the Center on Longevity at Stanford University and the Max Planck Institute for Human Development in Berlin issued a report stating: “We object to the claim that brain games offer consumers a scientifically grounded avenue to reduce or reverse cognitive decline when there is no compelling scientific evidence to date that they do.”

Research into cognitive training is now a sea of conflicting studies and contradictory claims. A variety of reasons exist for the divergence of opinion, but the root of the problem is the complexity of learning and intelligence—and hence of attempts to measure aspects of cognition. And in a field where even the most talented, earnest and diligent researchers regularly fail to exclude all possible sources of bias or error, research studies with industry ties will draw special scrutiny because when science and commerce intersect, truth can be a casualty.

Yet two conclusions do emerge from the murk: Training the brain in any meaningful way, especially as we age, is very difficult. Making it look like you have succeeded, however, is surprisingly easy.

Make yourself smarter!
Research into cognitive training goes back at least three decades, and then, as now, the holy grail of training was “far transfer.” The science-fiction-esque term refers to an improvement in mental skills significantly beyond the focus of the training activity, including, researchers hope, skills that are broadly useful in real-life tasks.

Early studies succeeded in showing gains only in tasks very similar to the training itself. In 1982, for instance, psychologist Karlene Ball, now at the University of Alabama at Birmingham, and neuroscientist Robert Sekuler, now at Brandeis University, conducted a study in which they trained people to detect slight differences in the direction of moving dots on a screen. The participants got better at discerning ever smaller differences, but the improvements were specific to the direction the dots were moving in the experiment. If the average direction of motion was rotated more than 45 degrees, the improvements vanished, and presumably the training had little relevance to real-world visual tasks such as driving.

Around this time, though, tantalizing clues began to emerge that the brain can change even in old age. Researchers once thought this property, called plasticity, was restricted to critical periods during development. But progress on various medical frontiers, most obviously in the capacity of stroke victims to recover, has provided new evidence of plasticity throughout life.

More relevant for the cognition entrepreneurs are signs that healthy adult brains can change, too. The most famous example is a 2000 MRI study by Eleanor Maguire and her colleagues at University College London, which showed that London taxi drivers, who must master detailed knowledge of the city, had marked differences in the shape of their hippocampus (the region used to store navigational information) compared with noncabbies. The longer the cabbies had been driving, the greater some of those differences were.

But these changes appeared in people who had acquired tremendous amounts of complex, real-world experience. The typical age-related decline in plasticity may actually occur for a reason: it is not an unqualified good. Later in life, neural plasticity is likely less important than neural stability, which lets us hold on to learning and habits we need. Moreover, plasticity is metabolically costly—requiring a lot of energy—so major change in adults does not come easily.

A handful of findings in the early 2000s finally showed that the effects of cognitive training might not be as limited as many had assumed. In 2002 a group led by Torkel Klingberg at the Karolinska Institute in Stockholm trained children with ADHD using “adaptive” memory tasks—those whose difficulty changes with the subject's performance. Adaptive training is based on the widely accepted principle that people learn best when pushed to the edge of their ability, so they get neither too bored nor too frustrated. The kids improved on tests of reasoning and attention compared with a group trained with nonadaptive programs. The team also found some evidence for reductions in ADHD symptoms, noting that the kids were less likely to look away from a task they were performing.
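
To make the adaptive principle concrete, here is a minimal Python sketch of one possible staircase rule that raises the difficulty after a run of correct responses and eases off after a miss. It is purely illustrative: the specific rule, the three-correct-answers threshold and the function name are hypothetical, not any lab's or company's actual algorithm.

```python
def adjust_level(level, recent_results, streak_to_advance=3):
    """Hypothetical staircase rule: step difficulty up after a streak of
    correct answers, step it down after a miss, otherwise hold steady."""
    if not recent_results:
        return level
    if not recent_results[-1]:                     # last trial missed: ease off
        return max(1, level - 1)
    if len(recent_results) >= streak_to_advance and all(
        recent_results[-streak_to_advance:]
    ):
        return level + 1                           # sustained success: push harder
    return level

# Three correct trials in a row bump the level; a miss drops it back down.
print(adjust_level(2, [True, True, True]))   # -> 3
print(adjust_level(3, [True, True, False]))  # -> 2
```

Commercial programs presumably tune such rules differently, but the goal is the same: keep the player neither too bored nor too frustrated.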

An ideal target for cognitive training is working memory, a measure of our ability to hold and manipulate information in the face of interference. Working memory acts as a kind of mental work space. It is involved in reading and problem solving and correlates with measures of IQ. The link with intelligence in particular inspired psychologists Susanne M. Jaeggi and Martin Buschkuehl, both then at the University of Bern in Switzerland, and their colleagues to develop a task to give working memory a workout. Their “dual n-back” training presents people with two simultaneous streams of information: shapes that appear on a screen and an audio sequence of spoken letters. Participants must indicate whenever a shape or a sound is the same as one presented n items ago. The task adapts to the subject's ability level by changing the value of n.
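
To see the logic of the task itself, here is a rough Python sketch, simplified and hypothetical rather than Jaeggi and Buschkuehl's actual software, of the core question posed on every trial: does the current shape match the shape from n trials back, and does the current letter match the letter from n trials back? The example sequences are invented.

```python
def dual_n_back_targets(shapes, letters, n):
    """For each trial, report whether the shape and/or the spoken letter
    matches the item presented n trials earlier (illustrative sketch only)."""
    targets = []
    for i in range(len(shapes)):
        shape_match = i >= n and shapes[i] == shapes[i - n]
        letter_match = i >= n and letters[i] == letters[i - n]
        targets.append((shape_match, letter_match))
    return targets

# A 2-back example: the third shape repeats the first; the fourth letter repeats the second.
shapes = ["square", "circle", "square", "triangle"]
letters = ["K", "T", "L", "T"]
print(dual_n_back_targets(shapes, letters, 2))
# [(False, False), (False, False), (True, False), (False, True)]
```

In the real task, n itself rises or falls with the subject's performance, in keeping with the adaptive principle described above.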

In a 2008 study, Jaeggi's group divided 34 healthy young adults into four groups that trained for different lengths of time. Psychologists differentiate between “crystallized” intelligence, which involves acquired knowledge, and “fluid” intelligence, which is the ability to reason with new material. Jaeggi and her colleagues evaluated the effects of training with tests of fluid intelligence that asked participants to figure out the relations between abstract shapes. The conclusion: training increased fluid intelligence, and the more people trained, the smarter they became. It seemed that subjects could boost their fluid intelligence with nothing more than hours of practice on a laboratory task. Researchers previously had thought intelligence was pretty much fixed, so this finding made a big splash. The promise of far transfer had materialized.

Jaeggi has never sought to commercialize dual n-back training, but versions of it now crop up in most companies' arsenals of games (MindSparke in particular focuses mainly on n-back training). And a number of game makers have cited the 2008 study as evidence that their brain games are effective—even though Jaeggi distances herself from such claims and was a signatory of the recent consensus statement.

Problems with motivation
No sooner had far transfer appeared to be within reach than critics threw it into doubt again. One of the main concerns involves a central problem in psychology: human beings react in a variety of complex ways when others are studying them. In a recent series of research reviews, psychologists at the Georgia Institute of Technology pointed out that people often change their behavior, usually by improving performance, when they know they are being watched.

And as I discovered myself, motivation can have a big effect on cognitive tasks. Many of the studies, including Jaeggi's, used so-called no-contact control groups, who took the tests at the beginning and end of the study period and had no contact with the researchers in between those times. The approach saves money, but it is inherently problematic because less interaction with the researchers can mean less motivation to perform.

The remedy is to use active control groups, who have the same degree of contact with researchers as the test subjects. And when some researchers did so, the far-transfer effect vanished. In 2013 psychologists Monica Melby-Lervåg of the University of Oslo and Charles Hulme of University College London conducted a meta-analysis that combined data from 23 studies of working memory training. They found a small increase in far-transfer measures of nonverbal reasoning but none at all when considering only studies using active control groups. (Jaeggi and her colleagues argued in a 2014 study that the latter studies failed to reproduce their findings because the test subjects did not fully engage with the training and so did not reap its benefits.)

At the same time that psychologists were looking for behavioral evidence of far transfer, neuroscientists were exploring whether training might induce changes in neural activity, thus demonstrating the biological plasticity believed to underlie benefits. Researchers trying to discern changes in activity typically ask participants to perform a task in a functional MRI scanner, both before and after training. Interpreting these results, however, can be difficult. At issue is whether differences in brain activity reflect genuine changes in cognitive ability or just changes in mental strategy arising from practice. Plus, scientists cannot predict whether a trained brain will show an increase in activity, implying more processing, or a decrease in activity, implying greater efficiency.

Practice makes perfect
With far transfer hard to achieve and demonstrate, much of brain training focuses on “near transfer”: exercises that confer benefits on tasks that use similar skills. Near transfer is less ambitious but also less controversial. Many studies show that training a particular cognitive ability, such as memory, can improve performance in other tasks using that skill even if it does not lead to gains in, say, reasoning tasks.

As it happens, though, showing why performance has improved is not a simple matter. People can get better at any task simply by practicing, so researchers must demonstrate that gains from training involve more than repetition by using tests that differ from the training task. Yet devising a task that taxes one and only one cognitive ability is nigh impossible. Everything we do involves multiple cognitive processes, so the effects of practicing one task can influence performance on others [see “Separate Brain-Training Fact From Fiction” below]. The only way researchers can be reasonably confident that improvements reflect real changes in a cognitive ability, rather than improvements in test-taking skills from practice, is to measure each outcome using multiple tests that tax the ability in different ways.

Even better than multiple tests is a set of sophisticated statistical techniques called latent factor measures, so named because they reveal changes in underlying abilities. These methods require both large batteries of tests and big samples. For instance, in 2010 psychologist Florian Schmiedek of the German Institute for International Educational Research and neuroscientists Martin Lövdén, now at the Karolinska Institute, and Ulman Lindenberger of the Max Planck Institute for Human Development used latent factor analysis in one of the most intensive training studies to date. Their regime involved 101 younger and 103 older adults, who performed six tests of perceptual speed, three working memory tests and three episodic memory tests, administered in an average of 101 hour-long sessions over six months. The researchers used 14 measures for outcomes, covering near and far transfer. They did find far-transfer effects to episodic memory and reasoning that were still present two years later. But the effects were very small, and older adults did not show these gains, presumably because of declining plasticity, which suggests that training may be less useful for those who need it most.
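
For a rough sense of the idea, the toy Python sketch below uses scikit-learn's generic FactorAnalysis, rather than the structural equation models such studies actually fit, and invented simulated data. It collapses several noisy test scores into a single latent score, so an apparent gain counts only if it is shared across the whole battery rather than driven by one well-practiced test.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 participants: one latent ability drives three observed
# test scores, each measured with its own noise (all numbers invented).
ability = rng.normal(size=(200, 1))
loadings = np.array([[0.8, 0.7, 0.6]])
scores = ability @ loadings + 0.5 * rng.normal(size=(200, 3))

# Estimate a single latent factor from the three tests.
fa = FactorAnalysis(n_components=1, random_state=0)
latent = fa.fit_transform(scores)

# The recovered factor tracks the simulated ability closely
# (its sign is arbitrary, hence the absolute value).
print(abs(np.corrcoef(latent.ravel(), ability.ravel())[0, 1]))
```

In studies like Schmiedek's, it is the change in such latent scores, measured before and after training, that is compared across groups.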

Although such studies represent a gold standard, they are rare because the resources required make them both cumbersome and expensive. In a 2014 review of studies of transfer, Schmiedek and his colleagues found that only 7 percent of studies used latent factor measures, and less than a quarter even used multiple measures.

Ironically, solid evidence that brain-training techniques can have measurable real-world benefits would finally emerge in an unconventional setting. Speed-of-processing training is based on a measure referred to as useful field of view—the breadth of space you can take in at a glance—developed almost 30 years ago by Ball of the University of Alabama and psychologist Daniel L. Roenker of Western Kentucky University. The task involves fixating on a central object while noting as rapidly as possible where in the visual periphery other objects appear.

And the measure of performance that verified efficacy? The risk of having a traffic accident. In a 2003 study, Roenker and his colleagues found that training resulted in a drop of about a third in the very real-world measure of dangerous maneuvers in driving tests. Not yet clear is whether the training enhances cognitive capacity or simply hones a skill useful in some actual circumstances, but such a theoretical consideration is of little concern to anyone sitting behind the wheel in rush hour. “There are a slew of studies that say people who practice these games improve their game playing,” says psychologist Laura Carstensen, director of the Stanford Center on Longevity. “The real question is, Does this transfer outside of a lab into improved functioning?” In this case, it seems to do so.

Buyer beware
Despite the occasional glint of sunlight on the horizon, uncontested evidence that brain training results in far transfer of cognitive skills, whether measured by increased IQ or impact on real-life functioning, remains rare, and researchers still debate the significance of near transfer. Thus, industry claims of quick and easy boosts to intelligence, grades or even mental functioning are looking increasingly hollow.

Just as worrisome for companies and customers alike is the possibility that the way brain games target single cognitive abilities might be eliminating the variety that helps to make learning effective. For instance, in a 1978 study, Robert Kerr and Bernard Booth, both then at the University of Ottawa, found that children who were trained to throw beanbags at targets two and four feet away later performed with greater accuracy when throwing at targets three feet away—a distance they never practiced—than did kids who trained only at the three-foot distance, suggesting that learning to calibrate the throw to varying distances mattered more than rote experience with any one distance.

So makers of brain games might be taking exactly the wrong approach. They break cognitive activity down into simple components and target them using highly repetitive procedures. This schema probably leads to faster improvement in the games, but it might also produce less transfer. When Lumosity promises on its Web site that “just 10–15 minutes of Lumosity training per day can lead to improvements in Lumosity over time,” the claim might be true, but it is also almost meaningless. Practicing any task will inevitably help you do it better without necessarily improving your performance at anything else. In other words, routing colored trains on a screen will not improve your lot in the world.

Yet the games that are most likely to be effective are not an easy sell, because they are so challenging. Dual n-back training, for instance, is a fiendishly tough and unpleasant experience—a big problem for the industry if engagement turns out to be key for achieving benefits.

People might be better off, in any case, engaging in pursuits known to have a payoff: naturally complex activities such as learning a language, taking up a musical instrument, or playing sports or even some video games. All engage multiple cognitive functions simultaneously and in constantly changing circumstances, a varied menu more likely to produce enhancements in abilities. Physical and social engagement have been repeatedly linked to healthier cognitive aging. So if the 30 minutes a day you spend training your brain means you're missing a walk with your dog, you're trading in known benefits for a gamble.

It's still early days for cognitive training. The problem, as ever, is that business has raced ahead of the science, with most companies paying little heed to the real state of the evidence when they put their marketing material together. Someday research could produce techniques that are at least modestly helpful for some people and in certain circumstances.

In the meantime, I'm going to lace up my sneakers and head out for a run.


Separate Brain-Training Fact From Fiction
Games that allegedly make you smarter may not live up to the hype

CLAIM: Brain games are designed by neuroscientists and scientifically tested

VALIDITY: True

Some brain-training outfits—Cogmed and Posit Science among them—were indeed founded by scientists, and they have conducted a slew of studies on their products, as have independent researchers. Most companies at least base their games on cognitive tasks devised by scientists and so can point to research into those tasks as evidence that their games work.

CLAIM: Brain training will improve your performance on cognitive tasks

VALIDITY: True but usually meaningless

The real surprise would be if training did not lead to higher scores for a given task. But such improvement does not indicate a boost in cognitive function. To be compelling, companies must show that the benefits transfer to tasks other than the one at the center of the training regime.

CLAIM: Brain training can treat ADHD

VALIDITY: Possible

Researchers have reported positive results, primarily for Cogmed's working memory training, in kids and adults with ADHD or other problems with attention. But findings overall tend to be mixed, and many of the positive results have been contested. Problems include the use of subjective measures such as parent or teacher ratings and disagreements about appropriate control groups.

CLAIM: Brain games can prevent cognitive decline

VALIDITY: Maybe but probably not

Some studies suggest that certain kinds of training lead to better performance on tasks different from the focus of the regimen, regardless of the participant's age, but others have found that gains are not shared by older adults. Completely missing: compelling evidence of impact on real-life functioning.

CLAIM: Brain games can prevent, or delay progression of, Alzheimer's disease

VALIDITY: False

Most companies are actually pretty careful not to make explicit claims of efficacy in treating Alzheimer's, choosing instead to strongly imply that their products prevent or treat the condition. But studies show no real evidence that cognitive training can forestall Alzheimer's or dementia in general.

CLAIM: Brain training can make you safer on the road

VALIDITY: True

Eureka! Evidence is strong that increased performance on speed-of-processing tasks—which train people to process their full field of view as quickly as possible—leads to improvements in driving performance, including reduction in the number of dangerous maneuvers committed in actual driving tests.