There’s a little too much wishful thinking about mindfulness, and it is skewing how researchers report their studies of the technique.
Researchers at McGill University in Montreal, Canada, analysed 124 published trials of mindfulness as a mental-health treatment, and found that scientists reported positive findings 60% more often than is statistically likely. The team also examined another 21 trials that were registered with databases such as ClinicalTrials.gov; of these, 62% were unpublished 30 months after they finished. The findings, reported in PLoS ONE on April 8, hint that negative results are going unpublished.
Mindfulness is the practice of being aware of thoughts and feelings without judging them good or bad. Mental-health treatments that focus on this method include mindfulness-based stress reduction—an 8-week group-based programme that includes yoga and daily meditation—and mindfulness-based cognitive therapy.
A bias toward publishing studies that find the technique to be effective withholds important information from mental-health clinicians and patients, says Christopher Ferguson, a psychologist at Stetson University in Florida, who was not involved in the study. “I think this is a very important finding,” he adds. “We’ll invest a lot of social and financial capital in these issues, and a lot of that can be misplaced unless we have good data.”
For each of the 124 trials, the researchers calculated the probability that a trial of that sample size could detect the effect reported. Experiments with smaller sample sizes are more affected by chance, and thus less likely to detect statistically significant positive results. The scientists' calculations suggested that about 66 of the 124 trials should have reported positive results; instead, 108 did. And none of the 21 registered trials adequately pre-specified which of the variables they tracked would be the primary outcome used to evaluate success.
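The arithmetic behind that comparison can be sketched roughly. The following is an illustrative back-of-the-envelope calculation, not the authors' actual method: it assumes a single average per-trial power of 66/124 and treats the trials as independent coin flips with that success probability, then asks how far the observed count of 108 positives sits from expectation.

```python
# Illustrative sketch (assumed numbers from the article, not the study's
# own power analysis): treat each trial as having an average power of
# 66/124, and compare expected to observed positive results.
from math import comb

n_trials = 124           # published trials analysed
expected_positive = 66   # positives predicted from per-trial power
observed_positive = 108  # positives actually reported

p = expected_positive / n_trials  # assumed average per-trial power

# Binomial tail: chance of seeing at least 108 positives by luck alone
tail = sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
           for k in range(observed_positive, n_trials + 1))

excess = (observed_positive - expected_positive) / expected_positive
print(f"excess positives: {excess:.0%}")
print(f"P(X >= {observed_positive}) = {tail:.2e}")
```

The excess works out to roughly 64% more positive results than power alone predicts, in line with the "60% more often" figure the article cites, and the binomial tail probability is vanishingly small under these simplifying assumptions.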
This doesn’t necessarily suggest that none of the mindfulness treatments work, says study co-author Brett Thombs, a psychologist at McGill and at the Jewish General Hospital in Montreal. “I have no doubt that mindfulness helps a lot of people,” he says. “I’m not against mindfulness. I think that we need to have honestly and completely reported evidence to figure out for whom it works and how much.”
Trials with larger sample sizes—and thus more statistical power—would be an improvement. In the McGill team's analysis, the 30 trials with the most statistical power showed no over-reporting of positive results.
The bias towards reporting positive results is pervasive across mental-health, psychology and medical research, says Ferguson. For example, the widely popularized theory of ego depletion—the idea that people have a limited reserve of self-control—recently failed to hold up in a large replication trial. “A lot of these things are reported to be true, they’re in a TEDx talk,” he says. “Now we're seeing, when we look at things much more closely, we’ve kind of been bullshitting people [for] a decade.”
He advocates pre-registering studies, in which a journal reviews and accepts a study—including the outcomes that it will measure—before data collection begins. This way, the journal publishes the trial results regardless of whether they are negative or positive.
Without this kind of agreement, journals are more likely to publish only positive results, and scientists need published papers to get funding and tenure. This creates a perverse incentive that does not make sense from a care standpoint. “For the health-care system,” says Thombs, “it’s just as important to know what doesn’t work.”
This article is reproduced with permission and was first published on April 21, 2016.