
How Data Beats Intuition at Making Selection Decisions

An algorithm beats expert job interviewers at hiring


Even expert interviewers can do poorly.
Credit: Thinkstock

When we make selection decisions – whether it is choosing a date, a potential business partner or a job candidate – we try our best to make accurate judgments about the potential of the people we are considering. These decisions, after all, have long-term consequences. A first date could turn into a long-lasting romantic relationship; a potential business partner could be a lifelong colleague; a job candidate could be someone we work with for years to come.

Yet, too often, we find ourselves asking, “What went wrong?” We may have spent a lot of time with the person and conducted multiple interviews and assessments to then realize, a few months later, that the person we chose is just not right. This is no rare event. For instance, data shows that traditional hiring methods produce candidates that meet or exceed the expectations of the hiring manager only 56 percent of the time — about the same result one would get tossing a coin.

We are generally very clear on what we are looking for: we can specify what the position requires and ask the candidate for information on the dimensions that seem relevant. But when it comes to evaluating all the data we collected, problems arise. A recent meta-analysis by Kuncel, Klieger, Connelly and Ones found that, across multiple criteria in work and academic settings, when people combined hard data with their own judgments, and those of others, their predictions were consistently less valid, and less predictive of real outcomes, than those generated by hard data alone. This was true even when the judgments were made by experts who were knowledgeable about the jobs and organizations in question. In predicting job performance, for instance, predictions based on hard data alone outperformed a combination of data and expert judgment by 50 percent. In evaluating candidates applying to jobs (no matter what the job was), the team of researchers found that a simple algorithm outperformed human judgment by over 25 percent.

What this research suggests is that relying on the most objective data available and using algorithms to interpret it to make selection decisions beats our intuition. By far.

By relying on intuition, in fact, we can make biased decisions. To take one example, we tend to infer a person's ability directly from their performance without adequately adjusting for the situation in which they operated, a systematic error known as the correspondence bias. For instance, when evaluating which employees to promote, a manager might focus exclusively on their successes and fail to adjust for the difficulty of their past assignments. Similarly, we might judge our leaders without factoring in market conditions, political challenges, and so on.

This type of error was demonstrated in a study I conducted with Don Moore of the University of California at Berkeley and Sam Swift and Zachariah Sharek of Carnegie Mellon University. Assuming the role of admissions officers for a selective MBA program, U.S. college students were presented with candidates' grade point averages and the average GPA at the college each attended. When deciding whom to admit, the participants overweighted applicants' GPAs and underweighted the effect of the grading norms at different schools. In other words, they did not appropriately account for the relative ease with which candidates earned their grades.
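The adjustment the participants failed to make is mechanically simple, which is exactly why an algorithm applies it consistently. The sketch below (with hypothetical applicants and scoring rules of my own invention, not the study's actual materials) contrasts ranking by raw GPA with ranking by GPA relative to each school's grading norms:

```python
# Illustrative sketch with made-up data: ranking applicants by raw GPA
# versus by GPA relative to their school's average grade.

applicants = [
    # (name, applicant's GPA, average GPA at the applicant's college)
    ("A", 3.8, 3.7),  # high GPA earned at a leniently graded school
    ("B", 3.5, 3.0),  # lower GPA earned at a strictly graded school
]

def raw_score(gpa, school_avg):
    # What the participants effectively did: weight the absolute GPA.
    return gpa

def adjusted_score(gpa, school_avg):
    # What a simple algorithm can do every time: credit performance
    # relative to the grading norms the applicant actually faced.
    return gpa - school_avg

best_raw = max(applicants, key=lambda a: raw_score(a[1], a[2]))
best_adj = max(applicants, key=lambda a: adjusted_score(a[1], a[2]))
print(best_raw[0])  # applicant A wins on raw GPA
print(best_adj[0])  # applicant B wins once grading norms are factored in
```

The two scoring rules reverse the ranking: A's 3.8 is only 0.1 above her school's norm, while B's 3.5 is 0.5 above his.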

Many other systematic errors can influence our judgments and lead to poor selection decisions. For example, Uri Simonsohn and I analyzed data from 9,000 interviews of MBA candidates conducted at business schools over the course of a decade. We found that strong candidates interviewed earlier in the day were more likely to be accepted than those interviewed later in the day. Once an interviewer gave several favorable scores to earlier applicants, that interviewer's subsequent scores for other applicants were likely to be lower. Interviewers were averse to judging too many applicants high or low on a single day, a bias that disadvantages candidates who happen to show up on days with especially strong applicants. This error, which held true even after we accounted for differences among the applicants and their interviews, was committed by experts who had been doing the job for years, day in and day out.
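The mechanism described above can be caricatured in a few lines of code. This is a toy simulation (the cap, threshold, and scores are invented for illustration, not taken from the study): an interviewer who is reluctant to hand out more than a few high marks per day ends up downgrading strong candidates who happen to appear late on a strong day.

```python
# Toy simulation of day-level score rationing (hypothetical parameters):
# an interviewer unwilling to give more than `max_high_marks` high scores
# in one day shaves points off later strong candidates.

def biased_scores(true_quality, max_high_marks=3, threshold=4):
    """true_quality: candidates' merited scores (1-5) in interview order."""
    scores, highs_given = [], 0
    for q in true_quality:
        if q >= threshold and highs_given >= max_high_marks:
            scores.append(q - 1)  # "too many" highs already awarded today
        else:
            scores.append(q)
            if q >= threshold:
                highs_given += 1
    return scores

strong_day = [5, 5, 4, 5, 4, 5]   # a day with many strong applicants
print(biased_scores(strong_day))  # later strong applicants get downgraded
```

On the simulated strong day, the fourth, fifth, and sixth candidates each lose a point purely because of when they were interviewed, mirroring the pattern found in the interview data.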

When our judgments and predictions of others are incorrect, the negative consequences for individuals, organizations, and society can be serious. Admitting a student who is not prepared and who later fails, hiring an employee who disrupts the workplace, and advancing a reckless executive to the role of CEO are just a few examples.

“To know that we know what we know, and that we do not know what we do not know, that is true knowledge,” Confucius once said. Algorithms using objective data lead to much greater accuracy in predicting widely valued outcomes such as job and academic performance. A true expert is someone who knows what they do not know—namely, that our intuition can fail us.

Are you a scientist who specializes in neuroscience, cognitive science, or psychology? And have you read a recent peer-reviewed paper that you would like to write about? Please send suggestions to Mind Matters editor Gareth Cook, a Pulitzer prize-winning journalist at the Boston Globe. He can be reached at garethideas AT gmail.com or Twitter @garethideas.
