Imagine that the U.S. is preparing for an outbreak of an unusual Asian disease that is expected to kill 600 people. Government officials have proposed two alternative programs to combat the disease. Under program A, 200 people will be saved. Under B, there is a one-third probability that 600 people will be saved and a two-thirds probability that nobody will. Confronted by this choice, 72 percent of people choose A, preferring to save 200 people for certain rather than risking saving no one.

Now imagine that officials present these two options instead: under program C, 400 people will die; under program D, there is a one-third probability that nobody will die and a two-thirds probability that all 600 people will perish. Faced with this pair of scenarios, 78 percent of people choose D, according to results of a classic study by Nobel laureate Daniel Kahneman, a psychologist at Princeton University, and his longtime collaborator, psychologist Amos Tversky.

Of course, these two pairs of options--A or B and C or D--are logically identical: saving 200 lives means that 400 people will die, and in both B and D, taking a one-third chance to save everyone means accepting a two-thirds chance to lose everyone. Logic would seem to dictate that your choice should be the same no matter how the options are worded. So why do people tend to prefer A to B, yet D to C?
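A quick back-of-the-envelope check makes the equivalence concrete. The sketch below (an illustration of the arithmetic, not part of the original study; the function name and program labels are just for exposition) computes the expected number of survivors under each program, and all four come out to the same 200 lives.

```python
# Expected number of survivors, out of 600, under each program described above.
TOTAL = 600

def expected_survivors(outcomes):
    """outcomes: list of (probability, people saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

programs = {
    "A": [(1.0, 200)],              # 200 saved for certain
    "B": [(1/3, 600), (2/3, 0)],    # one-third chance all saved, two-thirds chance none
    "C": [(1.0, TOTAL - 400)],      # 400 die for certain, i.e., 200 saved
    "D": [(1/3, TOTAL), (2/3, 0)],  # one-third chance nobody dies, two-thirds chance all die
}

for name, outcomes in programs.items():
    print(name, expected_survivors(outcomes))  # prints 200.0 for every program
```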

Kahneman and Tversky's research provides a clue: people respond to choices involving losses, such as deaths, differently from those relating to gains, such as survivors. When choosing between positive outcomes, people tend to be risk averse and want the sure thing (saving 200 people) but are far more willing to take risks when weighing losses--a psychological tendency that can be exploited by the deliberate wording of options. Some 30 years ago Kahneman and Tversky's initial findings in this field launched a concerted inquiry into how the framing of options affects people's decisions. Since then, they and many others have discovered various ways in which language can have a profound--and often counterintuitive--effect on the choices people make.

In addition to the loss-versus-gain effect, recent research shows, for example, that people can be moved en masse to opt for one alternative when it is positioned as the default--an unstated option that people get if they do not make a selection. But pick a different default and a crowd moves the other way, as if magnetically drawn to follow the unmarked road. People's decisions can be subtly influenced by context as well. Pitting one selection against a costlier or more frivolous alternative can make that choice seem more attractive than if it had been matched against a more favorable option.

We all seem rather fickle. Indeed, studies on the psychology of choice somewhat radically imply that we do not strictly possess preferences and values; instead we construct them in response to the questions the world asks us or the choices it presents us. The apparent capriciousness of our opinions often appears irrational, but in some cases there is a funny logic to it: descriptions may influence not only what we choose but also how we enjoy or appreciate that choice--a circular way of making that option the "right" one for us [see box on page 42].

Understanding how words steer our decisions regarding gains and losses can help guide the phrasing of public service announcements to best motivate people to, say, conserve energy or take care of their health. In other situations, officials might employ the power of defaults to lead people toward options they are likely to prefer, even if they tend not to choose them out of laziness, hurriedness or misunderstanding. And finally, an awareness of contextual wording traps may enable all of us to reconsider our reactions toward surveys, political campaigns and clever advertisements, recognizing that almost every question inexorably biases respondents toward one choice over another.

Gains and Losses
In their landmark research, Kahneman and Tversky pioneered the notion that two ways of describing a choice that are logically equivalent, as in the example above, are not necessarily psychologically equivalent. In what they called "prospect theory," Kahneman and Tversky described how objective outcomes--gains and losses--translate into subjective experience.

Although people do become more satisfied as an outcome gets increasingly favorable, a person's happiness does not increase in linear fashion in relation to the gain, according to prospect theory [see box above]. Instead a person's subjective state improves at an ever slower rate, until an objective improvement in a situation hardly changes a person's satisfaction at all--something economists call "diminishing marginal utility." This means, for example, that saving 600 lives will not feel three times as good as saving 200 lives--so taking a risk to save all 600 people feels like a bad psychological bet. Kahneman and Tversky argued that most people are risk averse when contemplating gains.

When it comes to negative outcomes, such as deaths, changes in a person's emotional state similarly diminish as the situation worsens rather than keeping pace with the mounting losses. Thus, losing 600 lives will not hurt three times as much as losing 200 would, so gambling on the chance that no one will be lost, even at the risk of losing everyone, feels like a good psychological bet. This principle leads people to seek risk when facing losses.
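To see how this curvature plays out in the disease problem, here is a minimal numerical sketch of prospect theory's value function. It assumes the common power-function form with the parameters Tversky and Kahneman later estimated (an exponent of about 0.88 and a loss-aversion factor of about 2.25), leaves out their probability-weighting function for simplicity, and uses variable names of my own choosing; it is an illustration of the argument, not a full model.

```python
# Prospect-theory value function: concave for gains, convex and steeper for losses.
# Parameters follow Tversky and Kahneman's later estimates; probability weighting
# is omitted, so this is only an illustrative sketch.
ALPHA = 0.88  # curvature of the value function
LAM = 2.25    # loss aversion: losses loom roughly 2.25 times larger than gains

def value(x):
    """Subjective value of gaining (x > 0) or losing (x < 0) that many lives."""
    return x ** ALPHA if x >= 0 else -LAM * (-x) ** ALPHA

def prospect(outcomes):
    """Expected subjective value of a list of (probability, lives) pairs."""
    return sum(p * value(x) for p, x in outcomes)

# Gains frame: a sure 200 saved beats a one-third chance of saving all 600.
print(prospect([(1.0, 200)]))             # about 106
print(prospect([(1/3, 600), (2/3, 0)]))   # about 93, so the sure thing wins

# Losses frame: a two-thirds chance of losing all 600 beats a sure loss of 400.
print(prospect([(1.0, -400)]))            # about -439
print(prospect([(2/3, -600), (1/3, 0)]))  # about -418, so the gamble wins
```

Notice that in the loss frame the loss-aversion factor multiplies both options and cancels out of the comparison; the flattening curvature alone is what flips the preference. Loss aversion itself drives a separate prediction, discussed below, that losses loom larger than equivalent gains.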

And whether people are attending to gains or losses depends on how the options are framed. In the A-versus-B choice people are considering gains, whereas they are pondering losses when faced with the C-versus-D scenario, explaining why people are not willing to take a risk in the first situation but are in the second.

Prospect theory also holds that people actually feel worse about a loss of a given amount than they would feel good about a gain of similar magnitude. That means getting people to focus on avoiding losses when they make decisions will be more motivating than getting them to focus on securing gains. This fact can be exploited. Appeals to women to do breast self-exams that emphasize the benefits of early cancer detection (gains) are less effective than those that emphasize the costs of late detection (losses). Pleas to homeowners to conserve energy that focus on savings (gains) in utility bills are less powerful than efforts that focus on the added costs of using energy profligately (losses).

The Power of Silence
Another powerful way to influence choice is to leave something unsaid. In the U.S. and many European countries, people who renew their driver's license are asked if they want to be an organ donor. As decision scientists Eric J. Johnson of Columbia University and Daniel Goldstein, now at London Business School, reported in 2003, more than 90 percent of the people in many European countries are organ donors, whereas only about 25 percent of Americans are--despite the fact that most Americans approve of organ donation. Why? In the U.S., to be an organ donor you have to sign a form. If you do not sign the form, you are not an organ donor. The latter is the default option, and that is the one most people choose. In much of Europe, the default option is the opposite of the U.S. default--you are an organ donor unless you indicate you do not want to be, so most Europeans make the reverse choice [see box on opposite page].

When employers switch procedures for voluntary 401(k) participation from opt-in (you have to sign a form to contribute to the plan) to opt-out (you have to sign a form to decline participation), initial enrollments jump from 49 to 86 percent, according to a 2001 study by University of Pennsylvania economist Brigitte Madrian and Dennis Shea of the United Health Group. And in a real-world experiment, the states of New Jersey and Pennsylvania simultaneously started to offer lower-cost, no-fault auto insurance. These policies restrict the right to sue while requiring insurance companies to pay regardless of who is at fault in an accident. In New Jersey--but not in Pennsylvania--no-fault insurance was the default. As Columbia's Johnson and his colleagues reported in the Journal of Risk and Uncertainty in 1993, almost 80 percent of car owners in each state ended up with the default. The choice of default has cost Pennsylvanians millions of dollars over the years.

Why do defaults have such power? Some of it may come from inattention. Life is busy and complicated, and it is not possible to pay attention to everything. That is why most of us keep our cell phone plan whether or not it is the best one for us. Researching alternatives is time-consuming, and we do not want to be bothered. But laziness and inattention are not the sole reasons for the power of defaults. As University of California, San Diego, psychologist Craig R. M. McKenzie and his colleagues showed in a 2006 study, most people infer that the default is the recommended option.

Given the power of defaults, policymakers could use them to nudge people in a direction that will enhance their well-being, something University of Chicago legal scholar Cass R. Sunstein and economist Richard Thaler call "libertarian paternalism." In this practice, leaders would choose defaults with an eye on people's stated or implied preferences (the "paternalistic" part) while allowing anyone to opt out (the "libertarian" element).

Although you cannot always know what people's preferences are, you can often discern them. In the example of 401(k) plans, we can surmise a desire to participate because we know that people are more likely to sign up the longer they stay in their job, as if they have been meaning to do it but have been putting it off. Knowing whether Pennsylvanians or New Jersey residents are getting what they really want for car insurance is harder to determine. But given that it is nearly impossible to present options in a neutral fashion, why not prod people in a direction that makes most of them better off?

Matchmaker
Yet a third major influence of framing on choice is context. The attractiveness of an option will frequently depend on what it is compared with. Some years ago the gourmet food and kitchen gadget purveyor Williams-Sonoma introduced a new product: an automatic bread maker. You just throw the ingredients in, push a button, and several hours later you have a loaf of bread. The device sold for $275. Was $275 a lot to spend on a bread maker? That price was hard to judge because no similar products were then on the market. Months later Williams-Sonoma introduced a "deluxe" bread maker that sold for $429. Sales of the regular bread maker shot up--because the new, more expensive bread maker made the regular one look like a good deal.

Effects like this are pervasive. In research reported in 2002, University of Oregon psychologist Paul Slovic asked a group of people how much they would pay in taxes for an airport safety measure that would save 98 percent of 150 people at risk a year. Then he asked a second group how much they would pay to save 150 people a year. The first group would pay more for the measure than the second group would. Why? After all, saving 100 percent of 150 people is more beneficial than saving 98 percent of 150 people. But when the number 150 has no context, people will consider a broad variety of ways to spend money, many of them affecting thousands or millions of people. On the other hand, giving the 98 percent success rate restricts the context of the question and seems impressive, so people see the intervention as quite cost-effective.

In another example of this phenomenon, Kahneman, Sunstein and their colleagues questioned a group of people about how much they would be willing to donate to a fund to reverse or prevent ecological disasters such as the loss of coral reefs and the endangerment of dolphins. Another group was asked how much they would be willing to pay for a program preventing skin cancer among farm workers. Surprisingly, the researchers found that people were willing to pay the same amount to save dolphins as to prevent skin cancer! But when they pitted dolphins and skin cancer directly against each other for a third group, the respondents were willing to spend vastly more money on skin cancer than on dolphins.

What is going on here? When people weigh saving dolphins against other ecological problems, dolphins rate high (they are so cute and so smart), so people will spend lots of money to save them. In contrast, skin cancer ranks low in priority on a list of serious health problems, so people choose to allocate relatively little money to it. But when dolphins and skin cancer appear on the same mental screen, people see skin cancer as much more worthy of resources. This change in public opinion occurs because when the options are framed narrowly, people decide within that limited context, comparing dolphin conservation only with other ecological issues and skin cancer only with other health issues. They lack a broad mental framework that could be used to contrast and evaluate divergent types of policies.

Thus, a more narrowly constructed question can raise a lower-priority project to greater prominence in people's minds, whereas if a public policy choice provides a more expansive framework, individuals can be subtly coaxed to reprioritize. Controlling the frame in a public policy debate can therefore sway the tide of public opinion in whatever direction the framers might prefer.

True Lies
All of this raises a key question: Do people actually know what they want? When faced with a decision, we imagine ourselves rationally considering our preferences and finding the option that best satisfies them. But research on how language affects decisions suggests otherwise. Instead of possessing preferences and values, we may simply create them when we are asked to make a decision. And, as we have seen, values and preferences can bend under the force of the question's wording. Thus, it is extremely difficult to discern people's "true" values and preferences, if they even exist.

Think about the public attitude toward the estate tax--a hefty tax on the assets of wealthy people when they die. This is a tax paid by a tiny handful of people--the most affluent group in the U.S. Yet a majority of Americans oppose it and support President George W. Bush's efforts to abolish it. What explains this peculiar public attitude? Is it that every American expects to be rich one day? I don't think so.

When Bush and his allies in Washington launched their campaign against the estate tax, they relabeled it the "death tax." Think of what this label does. Who pays the death tax? The dead person does. As if dying were not bad enough, the government reaches into the grave to extract its pound of flesh. Worse yet, the dead person has already paid taxes on that money, when it was originally earned. Now suppose that instead of calling it a "death tax," we called it an "inheritance tax." Who pays the inheritance tax? The living do--and, unlike the dead, they have never paid taxes on these assets before. The same tax seems much more attractive and fair under that label.

So what do people really think about this tax? Such a seemingly straightforward question is actually exceedingly difficult to answer. When evaluating almost anything, we are at the mercy of its framing or context. We may search in vain for a neutral way to describe policies and products alike, and our failures will have significant effects on decisions of all types. If we are vigilant about monitoring how options are packaged, we might sometimes be able to diagnose framing effects and counteract them. But we will never catch them all.