It is also important to remember that if a correlation is spurious, its spatial location in the brain should be random, but our correlational effects are not random. Studies of empathy for pain, fear of pain and the social pain of being rejected all show correlations between self-report measures and activity in the dorsal anterior cingulate cortex. This region is the same one that is surgically lesioned to treat intractable chronic pain—hardly random.
In your interview with Ed Vul, I see that he suggests that even if these effects aren’t entirely spurious, they may only account for a relatively small percentage of the variance and thus aren’t that scientifically interesting. First of all, that’s a major admission right there to go from arguing these are “spurious and invalid” to admitting they are “probably valid, but modest.”
Second of all, there are people who would otherwise be dead if we adopted Vul’s opinions regarding the importance of small effects. The biggest study to examine the effects of aspirin on heart attacks was stopped midway because the experimenters looked at the data and realized it was unethical to keep the subjects in the placebo condition from taking aspirin. Significantly more people had died from heart attacks in the placebo condition than in the aspirin condition, and yet the experimental manipulation (aspirin versus placebo) accounted for less than 1 percent of the variance in outcomes.
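To see how small a variance-explained number can be even when lives are at stake, here is a rough sketch in Python. The counts are only approximations of the published aspirin trial, used purely for illustration; the exact figures don't change the point.

```python
import math

# Approximate 2x2 outcome table for the aspirin trial (illustrative counts,
# not the exact published figures): rows are treatment, columns are outcome.
aspirin_mi, aspirin_ok = 104, 10933   # heart attacks vs. no heart attack on aspirin
placebo_mi, placebo_ok = 189, 10845   # heart attacks vs. no heart attack on placebo

a, b, c, d = aspirin_mi, aspirin_ok, placebo_mi, placebo_ok

# Phi coefficient: the Pearson correlation between treatment and outcome
# when both are coded as 0/1 variables.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(f"heart attacks: {c} on placebo vs. {a} on aspirin")
print(f"|r| = {abs(phi):.3f}, variance explained = {phi ** 2:.4f}")
# Prints roughly |r| = 0.034 and variance explained = 0.0011, i.e. about 0.1 percent.
```

Nearly twice as many heart attacks on placebo, and still the treatment explains about a tenth of a percent of the variance. By a variance-accounted-for standard, aspirin would look scientifically uninteresting.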
So is there likely to be some inflation in the r-values obtained in whole-brain correlational analyses? Sure, but we’ve known this for a long time and most studies are interested in identifying where in the brain meaningful relations are occurring rather than estimating their exact magnitude. Are the reported correlations egregiously inflated? Based on the sample of studies that Vul et al. survey, probably not. Is an invalid method being used to test whether meaningful correlations are present and therefore worthy of the label “voodoo”? No way.
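If it helps to see where that inflation comes from, here is a toy simulation in Python (the subject and voxel counts are arbitrary choices, not taken from any real study). When you search a whole brain of mostly unrelated voxels and then report the correlation at the peak you selected, the reported value tends to overshoot the true one, even though there really is a related voxel to be found.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels, true_r = 16, 1000, 0.5   # arbitrary illustrative values

behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))
# Make voxel 0 the only voxel genuinely related to behavior (true r = 0.5).
voxels[0] = true_r * behavior + np.sqrt(1 - true_r ** 2) * rng.standard_normal(n_subjects)

# Across-subject correlation of every voxel with the behavioral measure.
rs = np.array([np.corrcoef(v, behavior)[0, 1] for v in voxels])
peak = int(np.argmax(np.abs(rs)))

print(f"reported r at the selected peak (voxel {peak}): {rs[peak]:.2f}")
print(f"observed r at the truly related voxel 0: {rs[0]:.2f} (true r = {true_r})")
# The peak value is typically well above 0.5: selecting the maximum inflates
# the estimate, but that says nothing about whether a real relation exists.
```

That is the sense in which the magnitudes can be inflated while the question of whether, and where, a meaningful relation exists can still be answered.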
LEHRER: Do you think this controversy has had any benefits for the field, even though you strongly disagree with its findings?
LIEBERMAN: The answer is yes, but it’s worth taking a moment to discuss the potential harm as well. Despite the fact that Vul et al.’s novel claims (impossibly high correlations, invalid methods) are demonstrably false, these claims have the potential to bring great harm to the field. There are people at funding agencies and top journals wondering whether they should continue to support this kind of work. And this [effect] doesn’t just involve social neuroscience either, because anyone reading their paper can recognize that the issues Vul et al. raise, albeit incorrectly, apply equally to all areas of cognitive neuroscience. So even if people in the field recognize the limitations of the Vul et al. argument, it may be a challenge to regain the trust of those we count on to support our work. It’s a well-known social psychological fact that when someone is cleared of a crime, the lasting association is between the person and the crime, rather than the fact that they were cleared.
On to the good news. I think this is getting lots of people to think more carefully about many different kinds of analyses. For instance, many of the “independent” correlations that Vul et al. approve of have a source of bias (restriction of range) that causes them to underestimate the true correlation value. There’s a statistical correction for this, and we’ve included it in our reply. Additionally, the results of the simulation we ran in our reply were illuminating for us. Based on how we and other social neuroscientists typically analyze data, this simulation suggests that we should really be aiming for samples of at least 18 subjects, because at that size there was a dramatic drop in the number of false positives (for example, finding a correlation of r=0.80 when no true correlation existed). Of course, we would always like to run larger samples, but the expense of imaging is tremendous.
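For anyone who wants the flavor of that kind of simulation, here is a toy version in Python (not the simulation from our reply; the voxel count, the number of simulated studies, and the assumption of independent voxels are all simplifications chosen just for illustration). It asks: in a pure-noise whole-brain search, how often does at least one voxel reach |r| >= 0.80 by chance, as a function of sample size?

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(n_subjects, n_voxels=1000, n_studies=200, r_cut=0.80):
    """Fraction of simulated pure-noise studies in which at least one voxel's
    across-subject correlation with behavior reaches |r| >= r_cut."""
    hits = 0
    for _ in range(n_studies):
        behavior = rng.standard_normal(n_subjects)
        voxels = rng.standard_normal((n_voxels, n_subjects))
        # Pearson r of every voxel with behavior, computed via z-scores.
        bz = (behavior - behavior.mean()) / behavior.std()
        vz = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
        rs = vz @ bz / n_subjects
        hits += np.any(np.abs(rs) >= r_cut)
    return hits / n_studies

for n in (10, 14, 18, 22):
    print(f"n = {n:2d}: chance of a spurious |r| >= 0.80 somewhere = {false_positive_rate(n):.2f}")
```

In toy runs like this, the rate falls from near-certainty at around 10 subjects to a few percent in the 18-to-22 range, which is the qualitative pattern I'm describing; the exact numbers depend on the thresholds used and on how spatially correlated real fMRI data are.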