These are fascinating data. But, the researchers are quick to point out, they are correlational. There’s no way of knowing for sure whether these particular cues just happen to relate to selfishness in this context. What would constitute more compelling evidence that there is a trustworthiness signal? Ideally, we could program a human to display the set of cues (or not) and then see how this influences judgments of trust. That would be strong experimental evidence. But you can’t program humans — after all, we’re not robots. Luckily for these researchers, robots are robots.
Meet Nexi, the newest creation of the Personal Robots Group at MIT. Nexi is a social robot – able to express a range of emotions and expressions in order to meaningfully interact with humans (in a way that does not creep them out). When turned off, Nexi isn’t much more than a big-eyed hunk of metal with wheels for legs. But flip the switch and Nexi comes to life with human-like dexterity and mannerisms that compel us to see a mind in the machine.
Conveniently for the researchers, Nexi can be programmed to exhibit specific sets of behavioral patterns during interactions with humans. The perfect experimental manipulation. Will participants who interact with Nexi trust the robot less when it exhibits the set of nonverbal cues identified in Study 1? That is, will participants’ judgments of and behavior towards Nexi in the economic game be influenced by the robot’s expression of these (vs. other) cues? Yes indeed. Participants trusted Nexi significantly less when she was programmed with the human nonverbal signals of selfishness. And it’s not that participants liked Nexi less when exposed to those nonverbals – they liked the robot just as much as when the cues were absent. Their presence was exclusively related to participants’ trust.
This line of research vindicates our instincts about those with whom we interact. When we “just have a feeling” about someone, we can be right. So, then, what to make of Neville Chamberlain? Was he particularly bad at reading nonverbal cues? The picture grows more complicated when we interact with individuals who are motivated to conceal their true inclinations. Importantly, the participants in this study did not know they would be playing an economic game with each other while interacting. They had no reason to try to deceive their partners. This is not always the case in the real world. What would have happened if participants had known they would be playing the game before their interaction? Surely, the untrustworthy would have attempted to appear trustworthy. How successful would they have been? Would their partners still have been able to identify them in spite of their attempts? What if they knew which nonverbal cues to avoid displaying? The uncomfortable implication of these findings is that the more we uncover about the dynamics of trust, the more we learn about how to deceive effectively.
Are you a scientist who specializes in neuroscience, cognitive science, or psychology? And have you read a recent peer-reviewed paper that you would like to write about? Please send suggestions to Mind Matters editor Gareth Cook, a Pulitzer prize-winning journalist at the Boston Globe. He can be reached at garethideas AT gmail.com or Twitter @garethideas.