Fudge Factor: A Look at a Harvard Science Fraud Case

Did Marc Hauser know what he was doing?


As of this writing, the precise nature of Marc Hauser’s transgressions remains murky. Hauser is Harvard’s superstar primate psychologist—and, perhaps ironically, an expert on the evolution of morality—whom the university recently found guilty of eight counts of scientific misconduct. Harvard has kept mum about the details, but a former lab assistant alleged that when Hauser looked at videotapes of rhesus monkeys, in an experiment on their capacity to learn sound patterns, he noted behavior that other people in the lab couldn’t see, in a way that consistently favored his hypothesis. When confronted with these discrepancies, the assistant says, Hauser asserted imperiously that his interpretation was right and the others’ wrong.

Hauser has admitted to committing “significant mistakes.” In observing the reactions of my colleagues to Hauser’s shocking comeuppance, I have been surprised at how many assume reflexively that his misbehavior must have been deliberate. For example, University of Maryland physicist Robert L. Park wrote in a Web column that Hauser “fudged his experiments.” I don’t think we can be so sure. It’s entirely possible that Hauser was swayed by “confirmation bias”—the tendency to look for and perceive evidence consistent with our hypotheses and to deny, dismiss or distort evidence that is not.

The past few decades of research in cognitive, social and clinical psychology suggest that confirmation bias may be far more common than most of us realize. Even the best and the brightest scientists can be swayed by it, especially when they are deeply invested in their own hypotheses and the data are ambiguous. A baseball manager doesn’t argue with the umpire when the call is clear-cut—only when it is close.




Scholars in the behavioral sciences, including psychology and animal behavior, may be especially prone to bias. They often make close calls about data that are open to many interpretations. Last year, for instance, Belgian neurologist Steven Laureys insisted that a comatose man could communicate through a keyboard, even after controlled tests failed to find evidence. Climate researchers trying to surmise past temperature patterns by using proxy data are also engaged in a “particularly challenging exercise because the data are incredibly messy,” says David J. Hand, a statistician at Imperial College London.

Two factors make combating confirmation bias an uphill battle. For one, data show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant. Second, the mounting pressure on scholars to conduct single-hypothesis-driven research programs supported by huge federal grants is a recipe for trouble. Many scientists are highly motivated to disregard or selectively reinterpret negative results that could doom their careers. Yet when members of the scientific community see themselves as invulnerable to error, they impede progress and damage the reputation of science in the public eye. The very edifice of science hinges on the willingness of investigators to entertain the possibility that they might be wrong.

The best antidote to fooling ourselves is adhering closely to scientific methods. Indeed, history teaches us that science is not a monolithic truth-gathering method but rather a motley assortment of tools designed to safeguard us against bias. In the behavioral sciences, such procedures as control groups, blinded designs and independent coding of data are essential methodological bulwarks against bias. They minimize the odds that our hypotheses will mislead us into seeing things that are not there or blind us to things that are. As astronomer Carl Sagan and his wife and co-author Ann Druyan noted, science is like a little voice in our heads that says, “You might be mistaken. You’ve been wrong before.” Good scientists are not immune to confirmation bias. They are aware of it and avail themselves of procedural safeguards against its pernicious effects.
