By Janelle Weaver
Published animal trials overestimate by about 30 percent the likelihood that a treatment works because negative results often go unpublished, a study suggests.
This is a surprisingly strong bias, says the study's lead author, Malcolm Macleod, a neurologist at the Centre for Clinical Brain Sciences at the University of Edinburgh, UK. The work, published March 30 in PLoS Biology, analyzes the effect of publication bias in animal models of disease.
Macleod and his collaborators turned to a stroke database called the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Stroke (CAMARADES). An international team established the database in 2004 in response to poor translation of animal findings to clinical trials. Macleod's team combed through 525 studies, which encompassed 1,359 experiments testing a total of 16 different stroke treatments.
The team first estimated the magnitude of publication bias. For a given treatment, the observed effect sizes should be distributed evenly around the true effect, but the literature contained many reports of large effects and few reports of small ones. From this asymmetry, the team calculated the number of missing studies and derived an estimate of the 'true' effect of each treatment. In addition to the large overestimate of the treatments' efficacy, Macleod and his team found that as many as 16 percent of experiments remain unpublished.
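The article doesn't spell out the statistical machinery, but the inflation it describes is easy to demonstrate with a toy simulation: generate many noisy studies of the same treatment, "publish" only those that cross a significance threshold, and compare the average effect across all studies with the average across the published subset. Everything below (the parameters, the one-sided 1.96 cutoff, the function name) is illustrative and not taken from the study itself.

```python
import random
import statistics

def simulate_publication_bias(true_effect=0.3, sd=1.0, n_per_arm=20,
                              n_studies=500, seed=42):
    """Simulate many two-arm studies of one treatment, then compare the
    mean observed effect across ALL studies with the mean across only
    the 'published' (significant-looking) ones."""
    random.seed(seed)
    se = sd * (2.0 / n_per_arm) ** 0.5  # approx. SE of a mean difference
    all_effects, published = [], []
    for _ in range(n_studies):
        observed = random.gauss(true_effect, se)  # true effect + sampling noise
        all_effects.append(observed)
        if observed / se > 1.96:  # crude filter: only 'significant' results appear
            published.append(observed)
    return statistics.mean(all_effects), statistics.mean(published)

full, pub = simulate_publication_bias()
print(f"mean effect, all studies:      {full:.2f}")
print(f"mean effect, published only:   {pub:.2f}")  # noticeably inflated
```

With these toy settings the published-only average sits well above the all-studies average, which hovers near the true effect; the same logic, run in reverse on the asymmetric published record, is how a meta-analysis backs out an estimate of what is missing.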
Lost in translation
A little more than a third of highly cited animal research is reproduced later in human trials, and although about 500 treatments have been reported as effective in animal models of stroke, only aspirin and early thrombolysis with tissue plasminogen activator work in humans. The lack of negative results in the literature may explain why so few drugs tested in animals are effective in humans.
Some say that these findings add to the evidence that animal models are not particularly useful in predicting whether a treatment is effective in humans. "What you really want in drug trials is an animal model that can predict human responses, and that just violates the rules of evolutionary biology," says Ray Greek, an anesthesiologist and president of Americans for Medical Advancement, a non-profit organization in Goleta, Calif., that opposes the use of animal models of disease.
But Macleod says that animal studies can help to pave the way to useful therapies. He advocates strategies for making animal studies more efficient and effective, such as randomizing treatment conditions or keeping experimenters blind to treatment assignments. Not reporting the negative results of animal trials is unethical, says Macleod, because it squanders animals and leads to premature human trials. "If the research is not published, it doesn't contribute to our knowledge of human disease," he says.
The prevalence of publication bias illustrates the tendency of journals to report positive results, which are often viewed as more interesting and citable than negative findings. "If a result is negative, the investigator doesn't want to go through the work of writing it up and publishing it, because they know it won't get into a good journal and it won't really enhance their career," says S. Tom Carmichael, a stroke researcher at the University of California, Los Angeles.
Macleod hopes that the study will convince scientists to publish all of their findings and encourage publishing groups to launch pre-print archives, like Nature Precedings, that include negative studies. Journals such as the Journal of Negative Results in BioMedicine explicitly focus on the problem. Neurobiology of Aging has a special section on negative results, and the Journal of Cerebral Blood Flow and Metabolism (JCBFM), part of the Nature Publishing Group, is launching a section that reports negative results from rigorously conducted studies. (Scientific American is part of Nature Publishing Group.) "I'm very positive that over the next few years, such measures will become standard for scholarly journals," says Ulrich Dirnagl, editor-in-chief of JCBFM. "I hope the study convinces professional societies and funding bodies to value negative data more and support its publication."
But journals alone can't solve the problem, Macleod says. The effect of publication bias on clinical research has driven the development of registries such as ClinicalTrials.gov, in which clinical trials are logged before they begin. Macleod hopes that similar registries for animal studies will be introduced in the future. "When you're trying to make up your mind whether it's worth taking a drug forward," he says, "it's important to get access to all the information about the drug, not just the subset that was published in scientific journals."