Imagine the original job interview. The first one ever, back on the prehistoric savannas of eastern Africa or maybe in an early agrarian society in the Fertile Crescent. A member of an unknown settlement may have wandered in and offered some irresistible service—lion-wrangling expertise, perhaps, or Herculean strength in the field. Unlike in a modern job interview, early humans had no résumés, LinkedIn profiles or letters of recommendation to guide them. The fundamental idea, however, was the same: somehow the interviewer had to judge, in a brief interval, whether the applicant—a complete stranger—was trustworthy. Bringing on a sordid character as a business partner or as a steward of your goods could endanger your livelihood or even your personal safety.
To boost the odds of choosing a solid relationship and rejecting a dicey one, our ancestors might have learned to detect subtle, unintended signs in that initial, face-to-face interaction. Indeed, how do we make these judgments nowadays? Discerning the motives of strangers is a skill we rely on all the time. Every time you walk into a used-car lot or shop around for a home contractor or financial adviser, you are using your wits to pick someone trustworthy—and to avoid scoundrels.
Because trust and cooperation are so essential to the smooth working of human society, it makes sense that people would have learned over thousands of years both to send signals of trustworthiness and to pick up signs of malicious intent. Yet scientists have searched in vain for that single “golden cue” that predicts future cooperation or opportunism. Now a growing consensus rejects the idea of a single, isolated nonverbal signal of trustworthiness—or deceit—as simplistic. Rather than a certain grimace or gesture giving intentions away, a subtle constellation of clues may emerge dynamically during brief encounters. We sense this cluster of behaviors without realizing it and use them to judge a person's integrity.
In one experiment, pairs of unacquainted students chatted for five minutes about ordinary topics such as spring break and life in Boston. Other students had similar chats over the Internet, the only restriction being that they could not use emoticons, those symbols that convey emotion in online conversations. Then all the pairs played a game that measures cooperative and self-interested economic behavior. As expected, those who had chatted face-to-face beforehand were more accurate than the online chatters at predicting the trustworthiness, or sleaziness, of the stranger. Something in the face-to-face interaction—some nonverbal information missing from the text-only chat—had given away their opponents' intentions.
But what? To find out, the scientists asked two independent judges to analyze the videotaped interactions and identify all the possibly meaningful cues: smiling, laughing, leaning, looking away, crossing the arms, nodding, head shaking, and touching. Next they isolated the specific cluster of cues that was present when volunteers successfully detected others' self-serving intentions. Again and again, the opportunists displayed a cluster of four cues: hand touching, face touching, crossing arms and leaning away. None of these cues foretold deceit by itself, but together they combined into a highly accurate signal. And the more often the participants used this particular set of gestures, the less trustworthy they were in the subsequent financial exchange.
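The analysis logic described above can be sketched in a few lines: tally how often the four-cue cluster's members occur in a coded interaction, without treating any single cue as diagnostic. The coding scheme below is a hypothetical stand-in for the judges' actual protocol:

```python
# The four cues the study linked to self-serving play (labels assumed).
SUSPECT_CLUSTER = {"hand touching", "face touching",
                   "crossing arms", "leaning away"}

def cluster_score(cue_events):
    """Count occurrences of the four-cue cluster in one interaction.

    `cue_events` is a list of cue labels coded from a video. Only the
    combined tally is used, mirroring the finding that the cues
    predict untrustworthiness jointly, not individually.
    """
    return sum(1 for cue in cue_events if cue in SUSPECT_CLUSTER)

coded = ["smiling", "face touching", "nodding", "leaning away",
         "hand touching", "crossing arms", "face touching"]
print(cluster_score(coded))  # -> 5
```

A higher score would then be expected to correlate with less cooperative play in the subsequent game; the study reports exactly that dose-dependent pattern.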
This finding was intriguing but inconclusive. After all, people are constantly twitching and shifting, so it is difficult to know whether this specific cluster of cues, and only these cues, signals duplicity. To test the idea more rigorously, the scientists needed to manipulate the suspect motions experimentally and then see whether they did indeed inspire feelings of distrust.