There are three things extremely hard: steel, a diamond, and to know one's self.

--Benjamin Franklin

A teenage violinist applies to music school based on her notions of her musical virtuosity. A military officer volunteers to command a dangerous mission because he is confident about his bravery, leadership and grace under pressure. A healthy elderly woman decides not to get a flu shot because she feels that it is unlikely she will fall ill.

Over their lifetimes, people base thousands of decisions on the internal pictures they hold of their own skills, knowledge, personality and moral character. During decades of research, psychologists have examined just how accurate these self-perceptions are in a wide variety of tasks and circumstances. In study after study, researchers find that self-ratings of aptitude hold only a tenuous to modest relation, at best, with actual performance--indeed, other people can often foresee an individual's outcomes better than that person can. Individuals also overrate themselves. As a consequence, the average person claims to be above average in skill--a conclusion that, in aggregate, defies statistical possibility. He or she also overpredicts the likelihood of engaging in desirable behaviors and achieving favorable outcomes, furnishes excessively optimistic estimates of when he or she will complete future projects, and reaches judgments with too much confidence. The findings have important consequences for health, education and the workplace [see boxes].

Swelled Heads

How far off are self-judgments? People's notions about their intelligence tend to correlate only 0.2 to 0.3 with performance on intelligence tests and other academic tasks. (Correlation measures the direction--positive or negative--and extent--from +1 to -1--of the relation between two scores. For example, the correlation between gender and height is roughly 0.7.) College students' ratings of academic self-efficacy during their first year correlate only 0.35 with their instructors' evaluations. In the workplace, the correlation between how people expect to perform and how they actually do hovers around 0.20 for complex tasks.
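
To make the scale of these numbers concrete, the short Python sketch below computes a Pearson correlation for a handful of hypothetical self-ratings and test scores. The data are invented for illustration and are not drawn from the studies cited above, but the resulting coefficient (about 0.2) sits in the weak range those studies report.

from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient; ranges from -1 to +1.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical self-ratings of ability (1 to 10) and actual test scores (0 to 100).
self_ratings = [5, 6, 7, 8, 9]
test_scores = [70, 60, 75, 62, 73]

print(round(pearson(self_ratings, test_scores), 2))  # about 0.19: a weak relation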

People in some domains do better than others. In athletics, where critiques from coaches and others who have an outside perspective tend to be constant, immediate and unambiguous, the typical correlation is 0.47. In the realm of complex social interactions, however, where feedback might be occasional, often delayed and ambiguous, it tends to be much lower--for instance, just 0.04 for self-assessment of managerial competence and 0.17 for interpersonal skills.

Acquaintances may predict a person's performance in some situations better than the person himself or herself can. As Donald A. Risucci of New York Medical College and his colleagues put forth in a 1989 study, although the self-views of surgical residents are not related to their performance on standardized board exams, their supervisors' ratings are strongly related, as are the ratings of their peers who are equally inexperienced. And in a 1991 study by Bernard M. Bass and Francis J. Yammarino of Binghamton University, peer ratings of leadership, rather than self-ratings, predict which naval officers will be recommended for early promotion.

People also show in many different ways how they hold inflated views of their expertise, skills and character. Consider the tendency for the average person to see himself or herself as above average. In a 1976-1977 College Board survey of nearly one million high school seniors, 70 percent claimed to have above-average leadership skills, and only 2 percent gave themselves below-average marks. On their ability to get along with others, almost all respondents rated themselves as at least average--with 60 percent rating themselves in the top 10 percent of this ability and 25 percent rating themselves in the top 1 percent.

Students have no monopoly on such above-average effects. Motorcyclists believe they are less likely to cause an accident than the typical biker. Business leaders believe their company is more likely to succeed than the average firm in their industry.

Individuals also demonstrate inflated estimates of self when they assess how quickly they will complete tasks, a phenomenon known as the planning fallacy. For example, Roger Buehler of Wilfrid Laurier University in Ontario and his colleagues reported in a 1994 study that college students take three weeks longer to finish their senior thesis than the most realistic estimate that they give for the task--and one week longer than what they describe as their worst case scenario. In a similar vein, in 1997 Buehler, Dale W. Griffin of the University of British Columbia and Heather MacDonald, then at Simon Fraser University in Burnaby, B.C., found that citizens typically believe they will complete their tax returns more than a week sooner than they actually do.

Indeed, even when people are most confident, that conviction is no guarantee of accuracy. In 1977 studies by Baruch Fischhoff of Carnegie Mellon University, Paul Slovic of the University of Oregon and Sarah Lichtenstein, then at Decision Research in Eugene, Ore., a center for decision-making research, college students who expressed 100 percent certainty in their answers were still wrong roughly one time out of every five. In a 1981 study, when doctors diagnosed their patients as suffering from pneumonia, predictions made with 88 percent confidence turned out to be right only 20 percent of the time, according to Jay Christensen-Szalanski of the University of Iowa and James B. Bushyhead of Minor & James Medical, a group practice in Seattle.
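
Findings like these are usually summarized as a calibration check: group judgments by the confidence people stated and ask how often the answers at each level were actually right. The Python sketch below uses invented (confidence, correct) pairs rather than data from the studies above, but it shows the characteristic pattern of overconfidence--stated certainty running well ahead of the observed hit rate.

from collections import defaultdict

# Invented judgments: (stated confidence as a probability, whether the answer was right).
judgments = [
    (1.0, True), (1.0, True), (1.0, False), (1.0, True), (1.0, True),
    (0.9, True), (0.9, False), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, False), (0.7, True),
]

by_confidence = defaultdict(list)
for confidence, correct in judgments:
    by_confidence[confidence].append(correct)

for confidence in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} confident -> right {hit_rate:.0%} of the time")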

What Goes Wrong?

A wide variety of psychological mechanisms underlie these flawed self-assessments, and it would be difficult, if not impossible, to catalogue them all in a single article. Yet if we confine ourselves to two of the most widely documented biases--above-average effects and the overprediction of desirable events--we can describe two general underlying themes. The first is that people typically do not possess all the information required to reach reliably accurate self-assessments. Too many factors are unknown, unknowable or even undefinable for people to make accurate evaluations of self-performance or forecasts about how they will act in the future. Second, even when valuable information that would guide them toward appropriate self-evaluations is in hand, people often neglect it or give it too little weight, and so they err anyway.

Consider, first, the above-average effect. People often do not have the knowledge and expertise necessary to adequately assess how their competence stacks up against that of their colleagues--and the most incompetent are frequently the most prone to err in their personal judgment. Incompetent individuals suffer a double curse: their deficits cause them to make errors and also prevent them from recognizing what makes their decisions erroneous and the choices of others superior.

Several studies have now demonstrated that incompetent individuals fail to show much insight into their deficiencies. College students scoring in the bottom 25 percent on a course test routinely walked out of the exam room thinking they had outperformed a majority of their peers, according to a study by one of us (Dunning), Justin Kruger and Kerri L. Johnson, both at New York University, along with Joyce Ehrlinger of Cornell University. In a 2001 study at the University of Toronto by Brian D. Hodges and Glenn Regehr, medical students mishandling a mock interview with a patient rated their interviewing skills much higher than their instructors did.

In addition, missing information feeds overprediction of good performance. By definition, people are not aware of solutions they could have generated but missed--that is, their errors of omission. For example, suppose we asked you to make as many English words as you could from the letters of the word spontaneous (tan, neon, pants and so on), and you found 50. Whether this number is good or bad depends, in part, on how many words can actually be found in spontaneous, and it is unrealistic to expect anyone to have an accurate intuition of what that figure is. In fact, the letters in spontaneous can spell more than 1,300 English words.
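
The claim about spontaneous can be checked mechanically: a word counts if it uses each letter no more often than that letter appears in spontaneous. A minimal Python sketch is below; it assumes a word list at /usr/share/dict/words, and the exact count it prints will vary with the dictionary used.

from collections import Counter

TARGET = "spontaneous"
target_letters = Counter(TARGET)

def spellable(word):
    # True if the word uses each letter no more often than it appears in TARGET.
    return all(target_letters[ch] >= n for ch, n in Counter(word).items())

# Assumed word list; substitute any plain-text dictionary available on your system.
with open("/usr/share/dict/words") as f:
    words = {w.strip().lower() for w in f if w.strip().isalpha()}

found = sorted(w for w in words if len(w) >= 3 and spellable(w))
print(len(found), "words, for example:", found[:10])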

In the absence of complete feedback, people can harbor inflated views about the wisdom of their actions. Suppose an office manager takes a poorly performing employee aside and chews him out. The next day that employee does better--providing evidence for the sagacity of the office manager's intervention. Yet the manager does not know what might have been achieved by other alternatives, such as sitting down with the employee for a sympathetic talk or even doing nothing. Maybe these alternatives would have worked as well, or even better, but the manager will never know.

Perhaps most fundamentally, success in certain spheres is harder to define than in others. As a consequence, people regularly believe themselves to be above average on traits that are ill defined but not on ones whose definition is narrower. For example, as Dunning, Judith A. Meyerowitz and Amy D. Holzberg, then all at Cornell, found in 1989, people may say that they are more sophisticated, idealistic and disciplined than their peers (ambiguous traits, all) but not that they are any more neat, athletic and punctual (traits that are more constrained in their meaning).

Although humans naturally like to see how they stack up, people also misjudge their skills in relation to others by ignoring crucial information or by focusing exclusively on themselves. When evaluating their skill vis-à-vis their peers, individuals are egocentric, thinking primarily of their own behaviors and attributes and ignoring those of others, according to Kruger. Thus, people's comparative judgments often paradoxically involve very little actual comparison. Ask them how well they can ride a bicycle compared with other cyclists, and they say they do so quite well--mostly dwelling on how they have no trouble riding but forgetting that others have no difficulty, either. But ask them about their juggling ability, and they describe themselves as worse than average--neglecting again that others are also lousy jugglers.

This egocentrism leads people to make irrational choices. College students, for example, prefer to compete with others in a trivia contest focusing on Adam Sandler movies (an easy subject for them) than to compete in one on 19th-century French painting (a hard topic), forgetting that what is easy or difficult for them probably would be equally easy or difficult for their competitors. People bet more in poker games with a large number of wild cards in the deck because they are more likely to have a good-looking hand. But wild cards do not play favorites, and other players are equally advantaged as the number of wild cards expands.
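
The wild-card intuition can be checked with a toy simulation (simplified rules, not real poker): two players each draw five cards, a hand's strength is its largest same-rank group, and cards of designated wild ranks count toward any group. The scoring scheme below is an invented simplification for illustration, but it makes the point: adding wild ranks inflates both players' hands while leaving each player's chance of holding the stronger hand essentially unchanged.

import random

RANKS = list(range(13)) * 4  # a 52-card deck, suits ignored

def strength(hand, wild_ranks):
    # Largest same-rank group, with wild cards free to join any group.
    wilds = sum(1 for r in hand if r in wild_ranks)
    natural = [r for r in hand if r not in wild_ranks]
    best_group = max((natural.count(r) for r in set(natural)), default=0)
    return best_group + wilds

def simulate(wild_ranks, trials=20000, seed=1):
    rng = random.Random(seed)
    wins, total_strength = 0.0, 0.0
    for _ in range(trials):
        deck = RANKS[:]
        rng.shuffle(deck)
        a, b = deck[:5], deck[5:10]
        sa, sb = strength(a, wild_ranks), strength(b, wild_ranks)
        total_strength += (sa + sb) / 2
        wins += 1.0 if sa > sb else (0.5 if sa == sb else 0.0)
    return total_strength / trials, wins / trials

for wild_ranks in (set(), {0}, {0, 1}, {0, 1, 2}):
    mean_strength, p_win = simulate(wild_ranks)
    print(f"{len(wild_ranks)} wild ranks: mean hand strength {mean_strength:.2f}, "
          f"P(player 1 has the better hand) {p_win:.3f}")

The mean hand strength climbs with each added wild rank, but the probability of holding the better hand stays near 0.5--the simulated version of wild cards not playing favorites.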

Mispredictions of the future, usually overoptimistic ones, also arise because people do not have all the information they need to make more accurate forecasts. But more than that: individuals often cannot hope to have all the information they need--and thus they should proceed with caution whenever making a prediction about their future behavior. If someone walked up to you on the street to ask you to donate money to a charity, would you do it? Your actual behavior depends on any number of situational features you are not in a position to know until you are actually in the moment. Does the person asking look meek or menacing? Do you have time, or are you late to an appointment? Is it sunny or raining? Do you have any small bills in your pocket? Is the charity one you respect? Any of these details can influence whether you give or not, but you do not know which details will turn out to be true until you finally face the situation for real. People make overly confident predictions about their future behavior because they fail to correct for the fact that such important details of future situations are often unknown or unpredictable.

People may also have difficulty predicting how they will respond to circumstances that have significant emotional or visceral components. For example, office workers approached just after lunch predict that at 4 P.M. the following week they will prefer to receive a healthy snack, such as an apple, rather than some junk food, even though they know (intellectually) that they tend to be hungry late in the afternoon. When the next week arrives, however, they tend to prefer the calorie-laden junk food over the healthy fruit they predicted they would want, as reported in 1998 by Daniel Read of the London School of Economics and Political Science and Barbara van Leeuwen, then at Leeds University Business School. In short, men and women fail to adequately anticipate how emotional or visceral factors (such as hunger) will influence their behavior if they are not feeling those factors at the moment they make their predictions. Thus, when they are in a cold (logical) state, they mispredict how they will react when in a hot (emotional or visceral) state.

Getting the Picture

Can we get a better perspective on ourselves? One general solution is to consciously take what is called the outside view rather than an inside one. People adopting an inside view focus on the internal dynamics of a situation as well as their own personal dynamics and then spin a story of what they are likely to do or accomplish in a given situation. Adopting an outside view means setting aside storytelling and focusing instead on data. When predicting what they are likely to do in the future, people taking the outside view simply ask what they have tended to do in the past and take into account what has happened to others who have faced similar situations. For example, in one study students were asked to estimate when they would complete an academic task, and they guessed that they would finish it about four days in advance of the deadline (a goal that only about 30 percent achieved). Yet when asked when they had typically accomplished such tasks in the past, they admitted that they usually finished only one day before the deadline--and this time frame turned out to hold true for the project they were predicting. Similarly, a random sample of Canadian taxpayers thought that this year they would mail in their returns about a week earlier than usual--but they generally completed their returns about when they had in previous years.
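
In computational terms, the outside view is simply a base-rate forecast. The tiny Python sketch below contrasts a hypothetical inside-view guess with a prediction taken from how early (or late) similar past projects actually finished; all of the numbers are invented for illustration.

from statistics import median

inside_view_guess = 4  # "I'll be done four days before the deadline"

# Days before the deadline that comparable past projects actually finished
# (negative = finished late); hypothetical values for yourself and for peers.
own_history = [1, 0, 2, -1, 1]
peer_history = [1, -2, 0, 1, -1, 2, 0]

outside_view = median(own_history + peer_history)
print(f"inside view:  done {inside_view_guess} days early")
print(f"outside view: done about {outside_view} days early")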

In a telling 2003 study Daniel Lovallo of the University of New South Wales in Australia and Daniel Kahneman of Princeton University described a group of academics working on revising the curriculum of a local school system. When members were asked to predict how long it would take the group to finish the job, the single most pessimistic prediction was about two and a half years. On questioning, one member of the group did concede that, in his extensive experience, it usually took such groups seven years at best to complete their task, if they did it at all. The group ultimately wrapped up its work eight years later.

In sum, a wealth of evidence suggests that people may err substantially when they evaluate their abilities, attributes and future behavior. That said, we feel that the psychological literature has painted only a few brushstrokes of its portrait of the person as self-evaluator--and much more work must be done to fully render it. Perhaps more important, we need to develop a second image--one that depicts what an individual looks like when he or she has achieved an accurate impression of his or her talents, capacities and character. How one retouches the first portrait to create the second is an issue that requires much more theoretical and empirical work.