IMAGINE the original job interview. The first one ever, back on the prehistoric savannas of eastern Africa or maybe in an early agrarian society in the Fertile Crescent. A member of an unknown settlement may have wandered in and offered some irresistible service—lion-wrangling expertise, perhaps, or Herculean strength in the field. Unlike in a modern job interview, early humans had no résumés, LinkedIn profiles or letters of recommendation to guide them. The fundamental idea, however, was the same: somehow the interviewer had to judge, in a brief interval, whether the applicant—a complete stranger—was trustworthy. Bringing on a sordid character as a business partner or as a steward of your goods could endanger your livelihood or even your personal safety.
To boost the odds of choosing a solid relationship and rejecting a dicey one, our ancestors might have learned to detect subtle, unintended signs in that initial, face-to-face interaction. Indeed, how do we make these judgments nowadays? Discerning the motives of strangers is a skill we rely on all the time. Every time you walk into a used-car lot or shop around for a home contractor or financial adviser, you are using your wits to pick someone trustworthy—and to avoid scoundrels.
Because trust and cooperation are so essential to the smooth working of human society, it makes sense that people would have learned over thousands of years both to send signals of trustworthiness and to pick up signs of malicious intent. Yet scientists have searched in vain for that single “golden cue” that predicts future cooperation or opportunism. Now a growing consensus rejects the idea of a single, isolated nonverbal signal of trustworthiness—or deceit—as simplistic. Rather than a certain grimace or gesture giving intentions away, a subtle constellation of clues may emerge dynamically during brief encounters. We sense this cluster of behaviors without realizing it and use it to judge a person's integrity.
New research from psychological scientist David DeSteno of Northeastern University explored this idea with a fresh technological approach. Working with a large team of collaborators at the Massachusetts Institute of Technology, Cornell University and his own institution, DeSteno ran a two-part experiment to identify the intertwined nonverbal cues that warn of opportunism in others. In the first part of the study, the scientists videotaped
strangers conducting their first conversation together, either face-to-face or in a typed Web chat. The researchers guessed that if a set of nonverbal cues can indeed convey trustworthiness consistently, people should be better at judging others’ intentions face-to-face.
The pairs of unacquainted students chatted for five minutes about ordinary topics such as spring break, life in Boston, and so forth. Other students had similar chats via the Internet, the only restriction being that they could not use emoticons, those symbols that convey emotion in online conversations. Then all the pairs played a game that measures cooperative and self-interested economic behavior. As expected, those who had chatted face-to-face beforehand were more accurate in predicting the trustworthiness or sleaziness of the stranger. Something in the interaction—some nonverbal information that was missing from the text-only Web chat—had given away their opponents' intentions.
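The article does not name the economic game the pairs played. Studies of this kind often use a "give-some" exchange, in which each player decides how many tokens to hand over: tokens kept are worth a little to yourself, while tokens given are worth more to your partner, so mutual generosity pays better than mutual selfishness—but a lone defector profits most of all. The sketch below illustrates that payoff logic; the token counts and dollar values are illustrative assumptions, not figures from the study.

```python
# Sketch of a "give-some" exchange game, a common lab measure of
# cooperative vs. self-interested economic behavior. The specific
# numbers here are illustrative, not taken from the study.

TOKENS = 4       # tokens each player starts with
KEEP_VALUE = 1   # a kept token is worth $1 to its owner
GIVE_VALUE = 2   # a given token is worth $2 to the partner

def payoffs(give_a: int, give_b: int) -> tuple[int, int]:
    """Return (payoff_a, payoff_b) given how many tokens each player gives away."""
    payoff_a = (TOKENS - give_a) * KEEP_VALUE + give_b * GIVE_VALUE
    payoff_b = (TOKENS - give_b) * KEEP_VALUE + give_a * GIVE_VALUE
    return payoff_a, payoff_b

# Mutual cooperation beats mutual selfishness...
print(payoffs(4, 4))  # (8, 8): both give everything away
print(payoffs(0, 0))  # (4, 4): both keep everything
# ...but a lone defector does best of all against a cooperator.
print(payoffs(0, 4))  # (12, 0)
```

The tension between those outcomes is what makes such a game a useful probe: how much you hand over reveals how much you expect your partner to reciprocate.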
But what? To find out, the scientists asked two independent judges to analyze the videotaped interactions and identify all the possibly meaningful cues: smiling, laughing, leaning, looking away, crossing the arms, nodding, head shaking, and touching. Next they isolated the specific cluster of cues that were present when volunteers successfully detected others' self-serving intentions. Again and again, the opportunists displayed a cluster of four cues: hand touching, face touching, crossing arms and leaning away. None of these cues foretold deceit by itself, but together they transformed into a highly accurate signal. And the more often the participants used this particular set of gestures, the less trustworthy they were in the subsequent financial exchange.
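The dose-response pattern in that last sentence can be pictured as a simple tally: count how often the four target cues occur in a coded interaction and treat the total as a rough warning signal. This is a toy sketch of the idea, not the study's actual coding scheme, and the cue labels are my own shorthand.

```python
# Illustrative sketch only: tally occurrences of the four-cue cluster
# in a coded interaction. The study found that the more often a person
# displayed these cues, the less cooperatively they later behaved.

TARGET_CUES = {"hand_touch", "face_touch", "cross_arms", "lean_away"}

def cluster_count(observed_cues: list[str]) -> int:
    """Count how many coded cues fall in the four-cue target cluster."""
    return sum(1 for cue in observed_cues if cue in TARGET_CUES)

session = ["smile", "face_touch", "nod", "lean_away", "hand_touch", "smile"]
print(cluster_count(session))  # 3
```

Note that a single hit tells you little—smiling and nodding occur in the tally's input too—it is the accumulation of cluster cues across the conversation that carried the signal.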
This finding was intriguing but inconclusive. After all, people are constantly twitching and shifting, so it is difficult to know whether this specific cluster of cues—and only these cues—signals duplicity. To test the idea more rigorously, the scientists needed to experimentally manipulate the suspect motions and then see if they did indeed inspire feelings of distrust.
Enter Nexi, a robot especially designed to mimic human expressiveness. In the second phase of the study, it replaced one of the partners in each pair. The human partner had a 10-minute “conversation” with Nexi, again about mundane topics. The scientists meanwhile operated Nexi in Wizard of Oz fashion, making it lean away, touch its face and hands, and cross its arms. All Nexi's cues were derived from examples of human motion to make them as authentic as possible. The order varied, with some cues repeated, to simulate human fidgeting.
Other volunteers also chatted with Nexi for 10 minutes, but during these conversations Nexi used gestures other than the target movements. As reported in a forthcoming issue of the journal Psychological Science, when Nexi used the target gestures—but not when it made other humanlike movements—the volunteers reported feelings of distrust toward the robot. What's more, when they played the economic exchange game with Nexi, these volunteers expected to be treated poorly and behaved less cooperatively with the robot.
Interestingly, these results were narrowly focused on trust. That is, even when Nexi's body language made people skeptical of its motives, the study participants did not necessarily dislike it, according to their subsequent reports of their feelings toward it. This is a familiar human experience: many of us know individuals whom we like well enough but would never, ever trust with our money.