In 1961 Stanley Milgram embarked on a research program that would change psychology forever. Fueled by a desire to understand how ordinary Germans had managed to participate in the horrors of the Holocaust, Milgram decided to investigate when and why people obey authority. To do so, he developed an ingenious experimental paradigm that revealed the surprising degree to which ordinary individuals are willing to inflict pain on others.
Half a century later Milgram’s obedience studies still resonate. They showed that it does not take a disturbed personality to harm others. Healthy, well-adjusted people are willing to administer apparently lethal electric shocks to another person when told to do so by an authority figure. Milgram’s findings convulsed the world of psychology and horrified the world at large. His work also left pressing questions about the nature of conformity unanswered. Because of ethical concerns, psychologists have spent decades struggling to design equally powerful experiments that do not inflict distress on participants.
Researchers have now begun developing tools that allow them to probe deeper into his experimental setup. This work is pointing the way to new understandings of when and why people obey—and of the atrocities conformity can enable.
Obedience to Authority
When he began this project, Milgram had a different goal in mind. He intended to assess whether some nationalities are more willing than others to conform to the wishes of an authority figure. His plan was to start studying obedience in the U.S. and then to travel to Europe to look for differences in behavior among populations there.
The topic of conformity was not new, and indeed Milgram had been heavily influenced by psychologist Solomon Asch, with whom he had studied in 1959 at the Institute for Advanced Study in Princeton, N.J. Asch had shown that when asked to make public judgments about the length of a line, people were often willing to bend to the views of their peers even when doing so meant defying the evidence of their own eyes.
Milgram suspected that Asch’s results held hidden potential that might be revealed if he studied behaviors of greater social significance than simply judging lines. So Milgram designed an experiment in which participants—most of whom were men living near Yale University’s psychology department, where the study was conducted—were told to act as a “teacher” assisting an experimenter in a study of memory. Their task was to administer a memory test to a learner, who in reality was an actor employed by Milgram. When this learner supplied an incorrect answer, the participant was to give him an electric shock. The ostensible goal was to investigate the impact of punishment on learning: Would the shocks improve the learners’ performance or not?
To administer the shocks, the teacher had in front of him a shock generator with 30 switches on its front panel. The switches were arranged in ascending order from 15 volts, labeled with the words “slight shock,” all the way up to 450 volts, ominously labeled “XXX.” After each error the teacher had to depress the next switch to the right, increasing the jolt by 15 volts. Milgram was interested in seeing how far participants would go. Would they administer a “strong shock” of 135 volts? What about an “intense shock” of 225 volts? Perhaps they would instead stop at 375 volts: “danger: severe shock.” Surely, Milgram thought, very few subjects would go all the way—although people from some countries might go further than residents of other nations. In particular, he posited that Germans might be willing to deliver bigger shocks than Americans typically would.
Milgram was taken aback by what he found. His initial pilot studies with Yale students showed that people regularly followed the experimenter’s instructions. Indeed, the vast majority continued pressing switches all the way to the highest voltage—well beyond the point at which the shocks would prove lethal.
Of course, the shock generator was not real, so the learners never really suffered. But the participants did not know this, so by all appearances Milgram’s subjects seemed willing to deliver shocks sufficient to kill a person simply because they were asked to do so by a gray-coated lab assistant in a science experiment.
Startled by these findings, at first Milgram dismissed the results as a reflection of the particular nature of “Yalies.” Only when he reran the studies with members of the broader American public did he begin to realize he was onto something big. In what became known as the baseline, or voice feedback, condition, the teacher sits in the same room as the experimenter. The learner is in another room, and communication occurs only over an intercom. As the shock levels increase, the learner expresses pain and demands to be released from the study. At 150 volts he cries out, “Experimenter, get me out of here! I won’t be in the experiment any more! I refuse to go on!” Despite these pleas, 26 of the 40 participants, or 65 percent, continued administering shocks to the maximum, 450-volt level.
This discovery completely transformed Milgram’s career. He abandoned his plans to run the study in Europe—if Americans were already so highly obedient, clearly Germans could not conform much more. Instead he concentrated on examining exactly what about his experiment had led ordinary Americans to behave so unexpectedly. As Milgram put it, he was determined to worry this phenomenon to death.
Science of Defiance
Popular accounts of Milgram’s work most often mention only the baseline study, with its 65 percent compliance. In fact, he conducted many more. In his 1974 book, Obedience to Authority, Milgram describes 18 variants. He also conducted many studies to develop the paradigm that were never published. In one pilot experiment the learner provided no feedback to the participants—and almost every teacher went all the way to 450 volts. Another variant, in which participants helped in the study but did not actually depress the lever to deliver the shock, produced similar results.
When the subjects sat in the same room as the learner and watched as he was shocked, however, the percentage of obedient teachers fell to 40. It fell further when the participant had to press the learner’s hand onto an electric plate to deliver the shock. And it went below 20 percent when two other “participants”—actually actors—refused to comply. Moreover, in three conditions nobody went up to 450 volts: when the learner demanded that shocks be delivered, when the authority was the victim of shocks, or when two authorities argued and gave conflicting instructions.
In short, Milgram’s range of experiments revealed that seemingly small details could trigger a complete reversal of behavior—in other words, these studies are about both obedience and disobedience. Instead of only asking why people obey, we need to ask when they obey and also when they do not.
In his various papers describing the studies, Milgram provides a rich and diverse set of explanations for his findings. He describes how the participants are presented with the experiment’s worthy purpose—to advance understanding—a goal the participants respect. He notes how a subject is often torn between the demands of the experimenter and the victim, with the one urging him to go on and the other pleading with him to stop. He also expressed interest in the way other factors, such as the physical distance between the parties involved, might influence whom the participant listens to.
In the public eye, however, one theory has come to dominate: the idea that participants in the experiment enter into what Milgram terms an “agentic state” in which they cede authority to the person in charge. He developed this idea partly from Hannah Arendt’s famous analysis of Adolf Eichmann, a perpetrator of the Nazi Holocaust. As Milgram put it, “the ordinary person who shocked the victim did so out of a sense of obligation—a conception of his duties as a subject—and not from any peculiarly aggressive tendencies.” In the face of authority, humans focus narrowly on doing as they are told and forget about the consequences of their actions. Their concern is to be a good follower, not a good person.
Milgram was a brilliant experimentalist, but many psychologists are profoundly skeptical of the idea of the agentic state. For one thing, the hypothesis cannot explain why the levels of conformity varied so greatly across different versions of the study. More broadly, this analysis focuses only on participants’ obligations to the experimenter, although at several points in the studies they were also attuned to the fate of the learner.
When you examine the grainy footage of the experiments, you can see that the participants agonize visibly over how to behave. As Milgram recognized early on, the dilemma comes from their recognition of their duties to both the experimenter and the learner. They argue with the experimenter. They reflect the learner’s concerns back to him. They search for reassurance and justification.
In fact, in designing the studies, Milgram anticipated this process. To make it somewhat more controlled, he devised four verbal prods, which the experimenter would use if the participant expressed doubts. A simple “please continue” was followed by “the experiment requires that you continue” and then “it is absolutely essential that you continue.” The most extreme prompt was “you have no other choice, you must go on.”
As psychologist Jerry Burger of Santa Clara University has observed, of these four instructions only the last is a direct order. In Obedience to Authority, Milgram gives an example of one reaction to this prod:
Experimenter: You have no other choice, sir, you must go on.
Subject: If this were Russia maybe, but not in America.
(The experiment is terminated.)
In a recent partial replication of Milgram’s study, Burger found that every time this prompt was used, his subjects refused to go on. This point is critically important because it tells us that individuals are not narrowly focused on being good followers. Instead they are more focused on doing the right thing.
The irony here is hard to miss. Milgram’s findings are often portrayed as showing that human beings mindlessly carry out even the most extreme orders. What the shock experiments actually show is that we stop following when we start getting ordered around. In short, whatever it is that people do when they carry out the experimenter’s bidding, they are not simply obeying orders.
Morality and Leadership
The fact that we could so easily be led to act in such extreme ways makes it all the more important to explore when and why this happens. But at the same time, it raises ethical issues so acute that they appear to render the necessary research unacceptable. As much as we wish to help society understand human atrocity, and thus prevent it, we also must not distress the participants in our studies, who afterward will have to confront their own actions.
For a long time, researchers conducted secondary analyses of Milgram’s data, studied historical events, and designed experiments around less extreme behaviors, such as asking subjects to give negative feedback to job applicants or to squash bugs. No matter how clever the design, none of these studies investigated how humans can inflict extreme harm on one another as directly as Milgram’s did, nor did they have the same impact or social relevance.
Recently this stalemate has begun to shake loose. Mel Slater, a computer scientist at University College London, has developed a virtual-reality simulation of the obedience paradigm. He has shown that people behave much the same way in this environment as they do in real contexts, and he has suggested that his simulation can serve as a new venue for carrying out obedience experiments. Moreover, Burger has argued persuasively that those who obey the experimenter’s instructions at 150 volts are most likely to carry on obeying right up to XXX. By stopping the trials at this level, then, we can address the same issues that Milgram did without actually asking people to inflict extreme harm on others—and having those individuals suffer later from the knowledge that they are willing to do so.
The key issue remains: how to define the circumstances that enable people to inflict pain on others. Milgram himself suggested that group formation and identification might play a role in determining whether we side more with authority or its victims. Other studies closely related to Milgram’s have flagged these same processes, notably Philip Zimbardo’s prison experiment at Stanford University in 1971 [see “The Psychology of Tyranny,” by S. Alexander Haslam and Stephen D. Reicher; Scientific American Mind, October 2005]. Evidence suggests that we enact an authority figure’s wishes only when we identify with that person and his or her goals. In essence, obedience is a consequence of effective leadership. Followers do not lose their moral compass so much as choose particular authorities to guide them through the ethical dilemmas of everyday life. Obedient people are not mindless zombies after all.
This radical reinterpretation of Milgram’s studies clearly requires more data to support it, as well as further debate. Sadly, the need for this debate is no less pressing today than it was in 1961. With the recent government-led massacres in Libya and Syria and the shadows of Abu Ghraib and Guantánamo Bay hanging over us, we need more than ever to understand how people can be led to harm others—and how we can stop them.
Experimenting with Ethics
In a biography of Milgram, psychologist Thomas Blass of the University of Maryland, Baltimore County, described the furor that ensued after the New York Times ran an article on Milgram’s studies in 1963. An editorial in the St. Louis Post-Dispatch described the studies as “open-eyed torture.” The famous psychoanalyst Bruno Bettelheim called Milgram’s work “vile” and “in line with the human experiments of the Nazis.” Milgram was even attacked in The Dogs of Pavlov, a 1973 play by Welsh poet Dannie Abse. One character, Kurt, describes the setup of the obedience studies as “bullshit,” “fraudulent” and a “cheat.”
Milgram responded robustly, claiming that “no one who took part in the obedience study suffered damage, and most subjects found the experience to be instructive and enriching.” The data he collected from a questionnaire completed after each experiment are more nuanced, however. Of the 656 participants in the studies, 84 percent said they were glad to have taken part, 15 percent were neutral, and a mere 1 percent were sorry. More than half admitted to some level of discomfort during the studies, but only about one third admitted to having felt troubled by them since—and within this latter group, only 7 percent agreed that they had been “bothered by it quite a bit.” Although Milgram was probably right in saying that most people were fine, it is equally probable that a minority suffered to some degree.
Still, the fact that Milgram collected these data demonstrates that he was attuned to the ethical issues and aware of their importance.