The Science of Getting It Wrong: How to Deal with False Research Findings

The key may be for researchers to work more closely together and check one another's results

Talk about making waves. Two years ago medical researcher John Ioannidis of the University of Ioannina in Greece offered mathematical "proof" that most published research results are wrong. Now statisticians using similar methods have found—not surprisingly—that the more researchers reproduce a finding, the better chance it has of being true.

Another research team says that scientists must draw conclusions from imperfect information, but it offers a way to draw the line between justified and unjustified risks.

Meantime, in a possible sign of change, some genetics researchers have begun working more closely in an effort to prevent errors and enhance the accuracy of their results.


In his widely read 2005 PLoS Medicine paper, Ioannidis, a clinical and molecular epidemiologist, attempted to explain why medical researchers must so frequently retract past claims. In the past few years alone, researchers have had to backtrack on the health benefits of low-fat, high-fiber diets and the value and safety of hormone replacement therapy, as well as the arthritis drug Vioxx, which was pulled from the market after being found to cause heart attacks and strokes in high-risk patients.

Using simple statistical models rather than data from actual published research, Ioannidis argued that the results of large, randomized clinical trials—the gold standard of human research—were likely to be wrong 15 percent of the time, and that smaller, less rigorous studies were likely to fare even worse.
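
Ioannidis's argument can be framed as a positive-predictive-value calculation: how often a statistically significant result reflects a true effect, given the pre-study odds that the hypothesis was right in the first place. The sketch below illustrates that style of reasoning; the specific odds and power figures are illustrative assumptions, not numbers from the paper.

```python
def ppv(prior_odds, alpha=0.05, power=0.8):
    """Positive predictive value: the probability that a 'significant'
    finding is true, given the pre-study odds that the hypothesis is
    correct, the significance threshold, and the study's power."""
    true_positives = power * prior_odds   # real effects that get flagged
    false_positives = alpha               # null effects that get flagged
    return true_positives / (true_positives + false_positives)

# A well-powered trial testing a plausible hypothesis (1:1 odds)
print(round(ppv(1.0), 3))
# A small, underpowered study fishing among long shots (1:10 odds)
print(round(ppv(0.1, power=0.2), 3))
```

Under these toy numbers, the well-powered trial's significant result is right about 94 percent of the time, while the underpowered long-shot study is right less than a third of the time—the gap Ioannidis's analysis turns on.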

Among the most likely reasons for mistakes, he says: a lack of coordination among researchers and biases such as a tendency to publish only results that mesh with what they expected or hoped to find. Interestingly, Ioannidis predicted that more researchers in a field are not necessarily better—especially if they are overly competitive and furtive, like the fractured U.S. intelligence community, which failed to share information that might have prevented the September 11, 2001, terrorist strikes on the World Trade Center and the Pentagon.

But Ioannidis left out one twist: The odds that a finding is correct increase every time new research replicates the same result, according to a study published in the current PLoS Medicine. Lead study author Ramal Moonesinghe, a statistician at the Centers for Disease Control and Prevention, says that for simplicity's sake his group ignored the possibility that results can be replicated by repeating the same biases. The presence of bias reduces but does not erase the value of replication, he says.
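
The logic of Moonesinghe's result can be sketched by extending the predictive-value calculation above: if n independent studies all report the same significant effect, a real effect must have been detected n times, while a spurious one must have produced n false alarms in a row. The numbers below are illustrative assumptions, and—as the study's authors did for simplicity—the sketch ignores the possibility that all n studies share the same bias.

```python
def ppv_after_replication(prior_odds, n, alpha=0.05, power=0.8):
    """Probability a finding is true when n independent studies all
    report it as significant, assuming no shared bias between them."""
    all_true_detections = power ** n * prior_odds  # each study detects a real effect
    all_false_alarms = alpha ** n                  # each study is a false positive
    return all_true_detections / (all_true_detections + all_false_alarms)

# A long-shot hypothesis (1:10 pre-study odds), replicated 1, 2, 3 times
for n in (1, 2, 3):
    print(n, round(ppv_after_replication(0.1, n), 3))
```

Even for a long-shot hypothesis, the credibility of the finding climbs steeply with each independent replication—from roughly 60 percent after one study to nearly certain after three, under these toy assumptions.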

"I fully agree that replication is key for improving credibility, and replication is more important than discovery," Ioannidis says. But he adds that biases also have to be weeded out; otherwise replication may not be enough. For example, researchers reported in a much touted 2006 Science article that they had discovered a gene variant that seemed to confer a risk for obesity, and they replicated the results in four human populations. Last month they acknowledged that the finding was probably wrong.

Ioannidis says that researchers have become increasingly sophisticated at acquiring large amounts of data from genomics and other studies, and at spinning it in different ways—much like TV weathercasters proclaiming every day a record-setting meteorological event of some sort. As a result, he says, it is easy to come up with findings that are "significant" in the statistical sense, yet not scientifically valid.

To deal with this poverty of riches, Ioannidis proposes that researchers cooperate more to confirm one another's findings. Toward that end, he and other genetics researchers two years ago established a network of research consortia now consisting of 26 groups, he says, each with a dozen to hundreds of members, for investigators studying various cancers, HIV, Parkinson's disease and other disorders. The groups are intended to help teams in each field replicate one another's work.

Networks or not, doctors and health officials also have to decide how to treat patients based on published research that could be overturned, notes oncologist Benjamin Djulbegovic of the H. Lee Moffitt Cancer Center and Research Institute in Tampa. He and his colleagues contend in a second PLoS paper that physicians' decisions should be based on a mix of estimates of error for different types of studies (such as those that Ioannidis calculated), the potential benefits of the treatments reported in those studies, and how much of those benefits their patients can do without (or how much harm they can live with) if the finding turns out to be false.

"We can't work with 100 percent certainty," Djulbegovic says. "The question is: How false is false?" A well-conducted randomized trial is more likely to produce correct results, but a less rigorous study might still satisfy a physician if the risks are low and its potential benefits are great, he says.
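
Djulbegovic's tradeoff can be sketched as a toy expected-value rule: act on a finding when the benefit if it is true, weighted by the chance it is true, outweighs the harm if it is false. The threshold form and the numbers below are my illustration, not the paper's actual model.

```python
def act_on_finding(p_true, benefit_if_true, harm_if_false):
    """Toy decision rule: pursue a treatment when its expected benefit
    (if the supporting finding is true) exceeds its expected harm
    (if the finding turns out to be false)."""
    return p_true * benefit_if_true > (1 - p_true) * harm_if_false

# Weak evidence, but the drug is low-risk and the payoff is large: act
print(act_on_finding(p_true=0.3, benefit_if_true=10, harm_if_false=1))   # True
# Same evidence, but the drug is riskier: expected harm wins out
print(act_on_finding(p_true=0.3, benefit_if_true=10, harm_if_false=6))   # False
```

This mirrors the clinical intuition in the quote: a shaky study can still justify a low-risk, high-benefit treatment, while a risky one demands much stronger evidence.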

Ioannidis agrees that perfect certainty is impossible. "If you have a severe disease and there is only one medication available, and you know that it is only 5 percent likely to work, why not use it?" he says. But implementing such a calculus is trickier than it appears, he adds, because "we cannot assume that an intervention is necessarily safe in the absence of strong data testifying to this."
