This finding was intriguing but inconclusive. After all, people are constantly twitching and shifting, so it is difficult to know whether this specific cluster of cues, and only these cues, signals duplicity. To test the idea more rigorously, the scientists needed to experimentally manipulate the suspect motions and then see whether they did indeed inspire feelings of distrust.
Enter Nexi, a robot designed especially to mimic human expressiveness. In the second phase of the study, it replaced one of the partners in each pair. The human partner had a 10-minute “conversation” with Nexi, again about mundane topics. The scientists meanwhile operated Nexi in Wizard of Oz fashion, making it lean back, touch its face and hands, and cross its arms. All of Nexi's cues were derived from examples of human motion to make them as authentic as possible. The order of the gestures varied, with some repeated, to simulate natural human fidgeting.
Other volunteers also chatted with Nexi for 10 minutes, but during these conversations Nexi used gestures other than the target movements. As reported in a forthcoming issue of the journal Psychological Science, when Nexi used the target gestures—but not when it made other humanlike movements—the volunteers reported feelings of distrust toward the robot. What's more, when they played the economic exchange game with Nexi, these volunteers expected to be treated poorly and behaved less cooperatively with the robot.
Interestingly, the effect was specific to trust. Even when Nexi's body language made people skeptical of its motives, the study participants did not necessarily dislike the robot, according to their subsequent reports of their feelings toward it. This mirrors a familiar human experience: many of us know individuals whom we like well enough but would never, ever trust with our money.