Matthew Lieberman is associate professor of social neuroscience at the University of California, Los Angeles. In recent weeks he has also rebutted the claims of a recent paper, “Voodoo Correlations in Social Neuroscience,” which explored the high correlations between measures of personality or emotionality in the individual—such as the experience of fear, or the willingness to trust another person—and the activity of certain brain areas as observed in an fMRI machine. Mind Matters editor Jonah Lehrer chats with Lieberman about why most fMRI correlations aren’t false, the “reward” of intense grief and why accepting unfair offers seems to activate brain areas involved with self-control.
LEHRER: Your field of research has come under fire in a recent paper titled "Voodoo Correlations in Social Neuroscience." What's the authors' argument and have they identified a significant problem in this field?
LIEBERMAN: In their paper, Vul and colleagues suggest that brain–personality correlations in many social neuroscience studies depend on invalid methods and thus are “implausibly high,” “likely…spurious” and “should not be believed.” These claims are incorrect. The analyses in question use standard procedures for drawing inferences and protecting against false positives. The correlation estimates will tend to be somewhat higher than the true values, but there is no evidence to suggest that these correlations are meaningless or “voodoo” science.
The argument that Vul and colleagues put forward in their paper is that the correlations observed in social neuroscience papers are implausibly high. There’s a metric (the square root of the product of the reliabilities of the two variables) that determines just how high a correlation can be observed between two variables. They suggest that because, on average, this metric allows correlations only as high as 0.74, social neuroscientists should never see correlations higher than that.
Given the gravity of the claim, it’s important to get this [figure] right, but they do not. Here’s their mistake: it’s not the average of this metric that determines what can be observed in a study, but rather the metric for that particular study or, at the very least, the metric estimated from prior use of the actual measures in that study. Just because the average price of groceries in a supermarket is $3 does not mean you cannot find a $12 item. In fact, a study that I’m an author on (and is a major target in the Vul et al. paper) is a perfect example. The reliability of the self-report measure in our study is far higher than the average they report, allowing for higher observed correlations. They knew this [fact], but presented our study as violating the “theoretical upper bound” anyway.
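Classical test theory makes this ceiling concrete: two noisy measures of the same underlying trait can correlate at most at the square root of the product of their reliabilities. The following minimal simulation is an illustration only (the 0.70 and 0.80 reliability values are hypothetical, not taken from any particular study); it shows the observed correlation landing near that bound rather than above it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def noisy_measure(true_scores, reliability, rng):
    """Add noise so the observed score has the given reliability
    (reliability = share of observed variance that is true variance)."""
    noise_var = (1 - reliability) / reliability  # true variance is 1
    return true_scores + rng.normal(0, np.sqrt(noise_var), size=true_scores.shape)

true = rng.normal(0, 1, n)          # a single underlying trait
x = noisy_measure(true, 0.70, rng)  # e.g. a brain measure, reliability 0.70
y = noisy_measure(true, 0.80, rng)  # e.g. a self-report, reliability 0.80

ceiling = np.sqrt(0.70 * 0.80)      # theoretical maximum observable r
observed = np.corrcoef(x, y)[0, 1]  # lands near the ceiling, not above it
print(ceiling, observed)
```

A study whose measures are more reliable than these averages has a higher ceiling, which is Lieberman's point: the bound is set by the reliabilities in that particular study, not by a field-wide average.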
Their second major conceptual point is that numerous social neuroscience authors were making a non-independence error. Ed Vul gives a nice example of what he means by the non-independence error in a chapter with [Massachusetts Institute of Technology neuroscientist] Nancy Kanwisher. They suggest that we might be interested in whether a psychology or a sociology course is harder and assess this [question] by comparing the grades of students who took both courses. In a comparison of all students, we find no difference in scores. But what if we began by selecting only students who scored higher in psychology than sociology and then statistically compared those? If we used the results of that analysis to draw a general inference about the two courses, this [strategy] would be a non-independence error, because the selection of the sample to test is not independent of the criterion being tested. This [practice] would massively bias the results.
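The grade-comparison example can be reproduced in a few lines: simulate two courses with identical grade distributions, then run the same comparison after first selecting only students who did better in psychology. The grade numbers below are hypothetical, but the selection step manufactures a large difference out of pure noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical grades: no true difference between the two courses.
psych = rng.normal(75, 10, n)
socio = rng.normal(75, 10, n)

# Independent comparison over all students: mean difference near zero.
overall_diff = (psych - socio).mean()

# Non-independent comparison: select students who did better in
# psychology, then test that same difference in the selected subsample.
selected = psych > socio
biased_diff = (psych[selected] - socio[selected]).mean()

print(overall_diff, biased_diff)
```

Because the subsample was chosen by the very criterion being tested, the second comparison is guaranteed to show psychology scores as higher, no matter what the true relationship is.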