Cognitive science and moral philosophy might seem like strange bedfellows, but in the past decade they have become partners. In a recent issue of Cognition, the Harvard University psychologist Joshua Greene and colleagues extend this trend. Their experiment utilizes conventional behavioral methods, but it was designed to test a hypothesis stemming from previous fMRI investigations into the neural bases of moral judgments.
In their study, Greene et al. give subjects difficult moral dilemmas in which one alternative leads to better consequences (such as more lives saved) but also violates an intuitive moral restriction (it requires a person to directly or intentionally cause harm to someone else). For example, in the “crying baby” dilemma subjects must judge whether it is wrong to smother their own baby in order to save a large group of people that includes the baby. In this scenario, which was also used by the television show M*A*S*H, enemy soldiers will hear the baby cry unless it is smothered. Sixty percent of people choose to smother the baby in order to save more lives. A judgment that it is appropriate to save the most lives, even if it requires you to suffocate a child, is labeled “utilitarian” by Greene et al., whereas a judgment that it is not appropriate is called “deontological.” These names pay homage to traditional moral philosophies.
Emotion vs. Rationality
Based on previous fMRI studies, Greene proposes a dual-process model of moral judgments. This model makes two central claims. First, when subjects form deontological judgments, emotional processes are said to override controlled cognitive processes. In other words, the subjects who are unwilling to smother the baby are being swayed by their emotions: they can’t bear the idea of hurting a helpless child. This claim has been supported by a flurry of recent behavioral and neural studies. Second, the dual-process model claims that controlled cognitive processes cause utilitarian moral judgments. The new Cognition study puts that second claim to the test.
Neuroimaging reveals only correlations; it cannot determine whether a certain brain area is causing a particular judgment. But intervening in a process can provide evidence of causation. In the Cognition study, Greene et al. attempted to interfere with moral reasoning by increasing the cognitive load on subjects. They had subjects perform the moral judgment task at the same time as a monitoring task, in which subjects viewed a stream of numerals and responded to occurrences of “5.” If this added cognitive load interferes with the controlled cognitive processes that cause utilitarian judgments, the researchers surmised, then subjects should make fewer utilitarian judgments and should form these judgments more slowly.
As hypothesized, added cognitive load led to longer reaction times for utilitarian judgments, but the researchers found no effect on reaction times for deontological judgments. Although it took subjects longer to approve of acts like smothering a baby when also looking for the number 5, it did not take them longer to judge such acts inappropriate. This differential effect suggests that some of the cognitive processes involved in the monitoring task are also needed for the processes that lead to utilitarian judgments but not for those that lead to deontological judgments.
The cognitive load did not, however, decrease the proportion of utilitarian judgments, as the dual-process model predicts it should. People were just as likely to approve of smothering the baby, even if it took them a little bit longer to make that judgment. This is puzzling, and it suggests that the two processes do not compete. Greene et al. try to explain away this counterevidence by speculating that subjects “were determined to push through” the cognitive load, but this story makes sense only if subjects knew in advance that they wanted to reach a utilitarian judgment.
The dual-process model also predicts that, even in the absence of cognitive load, utilitarian judgments will be slower than deontological judgments. Because utilitarian judgments (such as that it is appropriate to smother the baby) rely on controlled, deliberate processes, they should take longer than deontological judgments, which depend on emotions, instincts and other fast, automatic processes. Greene et al. found this difference for low-utilitarian participants (who made fewer utilitarian judgments) but not for high-utilitarian participants. To explain this anomaly, Greene et al. postulate “an additional process” that enables high-utilitarian participants to make utilitarian judgments quickly.
These gaps in the theory are, perhaps, not very serious in the absence of any alternative explanation of Greene et al.’s main finding: that utilitarian judgments were slowed by cognitive load and deontological judgments were not. This effect might be due to the particular form of cognitive load, however. Greene et al. used numbers to create the cognitive load, but utilitarian judgments often hinge on numbers. Hence, confusion caused by the stream of numerals might lead subjects to recheck the numbers before forming utilitarian judgments, but not before forming deontological judgments, which do not depend on getting the numbers right. Scientists might test this alternative explanation by checking whether the same differential effect arises when the cognitive load takes other forms, such as monitoring for letters, colors or faces.
Future studies should explore the distinctions that the current literature roughly characterizes as emotion versus cognition, and deontological judgments versus utilitarian judgments. Further clarification will come with a more precise specification of which functional processes constitute the controlled cognition that is supposed to cause utilitarian moral judgments. Clearly, more work needs to be done. But that is the sign of a useful experiment: it raises tractable questions that further research can illuminate. For the time being, this study takes an important step forward both by addressing a crucial issue for the dual-process model and by presenting strong, though not conclusive, evidence for the role of controlled cognition in utilitarian moral judgment.
Mind Matters is edited by Jonah Lehrer, the science writer behind the blog The Frontal Cortex and the book Proust Was a Neuroscientist.