[Below is the original script. But a few changes may have been made during the recording of this audio podcast.]
In this month’s issue of Perspectives on Psychological Science, a psychologist and an artificial intelligence researcher speculate on the psychological impact of relating to incredibly humanlike robots, should they exist 50 years from now.
If we imagine that we’ve solved the remaining challenges facing artificial intelligence and robotic function—like computer vision and locomotion, among others—what sort of life will we have?
Imagine, for instance, dealing with an airline agent about a canceled flight without knowing whether they’re a real human or an android.
This so-called sentience ambiguity is uncomfortable. The researchers say that one way to resolve it is an angry attack. (Hmmm, how different might that be from dealing with a real human about a canceled flight?)
That note aside, the researchers say this may be the quickest way to determine whether you’re dealing with a robot: throw out some highly emotionally charged statement. After all, it is apparently an age-old method of establishing status.
Additionally, the AI required to grasp human emotion may forever remain the telltale distinguishing feature. The authors of the paper then ask: if anger turns out to be an effective test for “human,” will a culture of androids be an angry one?
They also wonder whether, if we treat androids as the outgroup they’ll inevitably be, the negative stereotypes we form about them will hinder any positive interaction. One thing we can be sure of, however, is that we won’t make them angry or hurt their feelings, unless, of course, we design them to react to discrimination.