Mo Costandi of Nature magazine

Research showing that action video games have a beneficial effect on cognitive function is seriously flawed, according to a review published this week in Frontiers in Psychology.

Numerous studies published over the past decade have found that training on fast-paced action video games such as Medal of Honor and Grand Theft Auto, which demand a wide focus of attention and quick responses, has broad 'transfer effects' that enhance other cognitive functions, such as visual attention. Some of the studies have been highly cited and widely publicized: one, by cognitive scientists Daphne Bavelier and Shawn Green of the University of Rochester in New York, published in Nature in 2003, has been cited more than 650 times and was widely reported by the media as showing that video games boost visual skills.

But, say the authors of the review, that paper and the vast majority of other such studies contain basic methodological flaws and do not meet the gold standard of a properly conducted clinical trial.

"Our main focus was recent work specifically examining the effects of modern action games on college-aged participants," says Walter Boot, a psychologist at Florida State University in Tallahassee, and lead author of the review. "To our knowledge, we've captured all of these papers in our review, and all of the literature suffers from the limitations we discuss."

Design defects

Most of the studies compare the cognitive performance of expert gamers with that of non-gamers, and they suffer from well-known pitfalls of experimental design. The studies are not blinded: participants know that they have been recruited for their gaming expertise, which can influence their performance because they are motivated to do well and prove themselves. And the researchers know which participants are in which group, so their preconceptions might inadvertently affect participants' performance.

A more rigorous methodology is used in training studies, such as those conducted by Green and Bavelier, in which non-gamers are randomly assigned to one of two groups. One group is trained on an action video game, and the other on a different type of game, such as the slower-paced block-rearrangement game Tetris. Participants' performance on a cognitive task is measured before and after the game training.

But these studies, too, have shortcomings. The researchers usually assume that placebo effects, whereby participants improve because they expect to improve, will be comparable between the two groups. In fact, each group might expect its particular training to improve performance on different types of tasks, producing unequal placebo effects across the groups.

The studies' results could also be confounded if one of the games resembles the measured cognitive task more closely than the other does, a factor that researchers rarely take into account.

Furthermore, many studies report different beneficial outcomes from the same group of participants across multiple papers, making it unclear how many times the results have been independently replicated.

Neither Green nor Bavelier, who between them worked on ten of the papers analyzed in the review, was available for comment.

Game over?

Boot and his colleagues say that none of the studies they examined avoided all of these methodological pitfalls, raising doubts about the cumulative evidence that action video games enhance cognition. Boot stresses that the studies' claims are not necessarily wrong: the available evidence is promising, but it is not yet compelling enough to support strong conclusions.

The team suggests that all future studies into the effects of gaming should follow the basic principles of good experimental design. Researchers and participants should have no knowledge of who is assigned to which group. Samples should be more representative, including equal numbers of men and women. And recruiting strategies and experimental design should be stated explicitly and fully in the resulting paper.

Jay Pratt, a psychologist at the University of Toronto in Canada, who was involved in some of the studies analyzed in the review, agrees with some of the criticism. "Large comprehensive training studies with a balance of men and women are a step in the right direction," he says. "Expanding the scope of that style of experimenting could prove useful."

This article is reproduced with permission from the magazine Nature. The article was first published on September 16, 2011.