When Bjorn Olsen of Harvard Medical School began work on the Journal of Negative Results in Biomedicine, some assumed the project was a gag. The peer-reviewed journal publishes serious research; it is just that its vision runs counter to traditional medical publishing, which tends to hide negative findings, such as a drug study that turns up adverse side effects but no measurable improvement. This publication bias is troubling because decisions based on a skewed sense of relative risks and benefits can be a matter of life or death. That was the problem in the Paxil case, in which the State of New York sued GlaxoSmithKline for suppressing data showing that the antidepressant increased teens' risk of suicide.
Now, thanks largely to the Paxil case, two recent moves are tackling the problem of publication bias. First, a group of leading journal editors announced in September a policy, effective July 2005, requiring all clinical trials to be registered from the get-go to be considered for publication in their journals. The editors expect this policy to reduce the bias toward favorable results, because researchers will have gone on the record before they know how the study will turn out. Then, in October, six Democratic lawmakers introduced House and Senate bills that would require drug companies to register clinical trials and report results in a public database (http://clinicaltrials.gov). Registration in this National Institutes of Health database is already mandatory for research on "serious and life-threatening" diseases, but lax enforcement has allowed up to half of all such trials to go unregistered.
Publication bias becomes dangerous when doctors prescribe drugs for uses not approved by the Food and Drug Administration, explains Catherine DeAngelis, editor of the Journal of the American Medical Association (JAMA). Although the FDA bars companies from marketing products that have not been proved safe and effective for treating a particular condition in a given population, physicians can and do prescribe them for other conditions. Such "off label" use usually happens because a sales rep distributes reprints of a published study showing the drug's effectiveness in a new use--a marketing practice the FDA currently allows. DeAngelis says the busy doctor might conclude, "Well, gee, if JAMA published it, this is good!"
That conclusion would be unfounded even for a well-done study, according to Steven Piantadosi, director of oncology biostatistics at Johns Hopkins University: "If it's one of many studies, you really need to see all of them." Publication bias also poses a problem for scientists conducting a meta-analysis of the literature on a given treatment, Piantadosi adds, and statisticians cannot correct for a bias whose magnitude is unknown. That is why mandatory trial registration is a good idea, believes Kay Dickersin, a professor at Brown Medical School who has studied the problem: registration at least gives a "denominator" representing the total number of studies conducted on a treatment, even if registered studies remain unpublished.
The Pharmaceutical Research and Manufacturers of America (PhRMA) opposes mandatory registration and introduced its own answer--a voluntary results database for FDA-approved drugs. Alan Goldhammer, PhRMA's associate vice president for regulatory affairs, insists that "if you conduct a trial and the trial is neutral, it's very difficult to publish it in the peer-reviewed literature." But this statement doesn't stand up to the research on publication bias, according to Dickersin: "Investigators really like to blame the editors--'Oh, my papers wouldn't get accepted'--but when you actually look at the data, investigators aren't submitting their papers."
That is not surprising given the inherent conflict of interest between researchers' noble motives--aiding collaboration and honoring the pact with human subjects to make the findings public--and their personal agendas, which include furthering their careers, not aiding the competition and, in the case of industry research, protecting the source of their funding. In fact, Dickersin reports that publication bias is strongest for work funded by industry. And because of selective outcome reporting, even a "positive" study may mask some unreported negative results, she notes. Which is not to say that editors do not contribute to publication bias. JAMA editor DeAngelis admits that she and her colleagues "all vie with each other for the best and most exciting papers, and generally those aren't the negative studies." But, she adds, "We don't want to be part of the problem." As Olsen puts it: "Negative results can be very positive in their consequences."