Social Sciences Suffer from Severe Publication Bias

"Null" results rarely see the light of day, a survey finds


When an experiment fails to produce an interesting effect, researchers often shelve the data and move on to another problem. But withholding null results skews a field's literature, and is a particular worry for clinical medicine and the social sciences.

Researchers at Stanford University in California have now measured the extent of the problem, finding that most null results in a sample of social-science studies were never published. This publication bias may cause others to waste time repeating the work, or conceal failed attempts to replicate published research. Although already recognized as a problem, “it’s previously been hard to prove because unpublished results are hard to find”, says Stanford political scientist Neil Malhotra, who led the study.

His team investigated the fate of 221 sociological studies conducted between 2002 and 2012, which were recorded by Time-sharing Experiments for the Social Sciences (TESS), a US project that helps social scientists to carry out large-scale surveys of people's views.




Only 48% of the completed studies had been published, so the team contacted the authors of the remaining studies to find out whether they had written up their results or submitted them to a journal or conference. They also asked whether the results had supported the researchers’ original hypothesis.

Of all the null studies, just 20% had appeared in a journal, and 65% had not even been written up. By contrast, roughly 60% of studies with strong results had been published. Many of the researchers contacted by Malhotra’s team said that they had not written up their null results because they thought that journals would not publish them, or that the findings were neither interesting nor important enough to warrant any further effort.

“When I present this work, people say, ‘These findings are obvious; all you've done is quantify what we knew anecdotally’,” says Malhotra. But social scientists often underestimate the magnitude of the bias, or blame journal editors and peer reviewers for rejecting null studies, he says. His team's findings are published today in Science.

Poisoned by success
The problem may be bigger than the TESS sample suggests. Each survey design proposed to TESS is peer-reviewed, to ensure that it has sufficient statistical power to test an interesting hypothesis; weaker studies in these fields would probably have an even lower rate of publication. “It’s very likely that this study underestimates the true extent of the problem,” says Daniele Fanelli, an evolutionary biologist who studies publication bias and misconduct, and is currently a visiting professor at the University of Montreal in Canada.

In 2010, Fanelli surveyed publication bias across a range of disciplines, and found that psychology and psychiatry had the greatest tendency to publish positive results. “But it’s not just a social-science issue — it’s also common in the biomedical sciences,” says Hal Pashler, a psychologist at the University of California, San Diego, in La Jolla. “Both are really poisoned by only hearing about the successes.” (See ‘“Ethical failure” leaves one-quarter of all clinical trials unpublished’.)

Social scientists are already trying to tackle publication bias (see ‘Replication studies: Bad copy’). Malhotra is involved in the Berkeley Initiative for Transparency in the Social Sciences, which advocates a range of strategies to strengthen social-science research. One option is to log all social-science studies in a registry that tracks their outcome — a model that is already used to help ensure that null results from drug trials see the light of day. Meanwhile, Pashler has set up a website, PsychFileDrawer, to capture null results generated by attempts to replicate findings in experimental psychology.

These remedies have not been universally welcomed, however. “There’s been a lot of pushback,” says Malhotra. Some social scientists are worried that sticking to a registered-study plan might prevent them from making serendipitous discoveries from unexpected correlations in the data, for example. But most accept the need for change, adds Pashler: “We’re all waking up to this.”

This article is reproduced with permission and was first published on August 28, 2014.

Mark Peplow is a journalist based in Penrith, England.


First published in 1869, Nature is the world's leading multidisciplinary science journal. Nature publishes the finest peer-reviewed research that drives ground-breaking discovery, and is read by thought-leaders and decision-makers around the world.

