“The face is the index of the mind,” according to an ancient proverb. People with autism, however, are often unable to judge when a face conveys emotions such as happiness or sadness, and many researchers take this as evidence that autism involves serious deficits in processing social information. Yet the voice, too, can provide emotional cues, and several recent studies indicate that when listening to voices, people with autism can actually recognize feelings and other traits of humanness as well as—or even better than—neurotypical people do.
The studies were small and focused exclusively on high-functioning adults with autism, whose abilities are not necessarily representative of the broader autistic population, points out Andrew Whitehouse, head of autism research at Telethon Kids Institute in Australia. And success on a laboratory task does not necessarily translate into success in real-world social interactions, adds Helen Tager-Flusberg, a professor of psychological and brain sciences at Boston University. Nevertheless, the studies suggest that at least for some subgroups of autistic people in certain situations, deficits in identifying emotions could be confined primarily to vision. “This is great news from a treatment perspective,” says Kevin Pelphrey, director of the Autism and Neurodevelopmental Disorders Institute at George Washington University. “It is much easier to help someone overcome an inability to read emotion from faces than it would be to treat a fundamental lack of understanding of emotion from all modalities.”
Three Studies on Autism and Emotion
Daniel Javitt of the Nathan Kline Institute for Psychiatric Research in New York State and his colleagues showed participants photographs of faces expressing happiness, sadness, fear or anger. The 19 participants with autism did a poor job of identifying these emotions. But when the researchers played audio clips of voices conveying similar feelings, these same participants identified the relevant emotions just as well as a control group did. The results were published in August in the Journal of Psychiatric Research.
Neuroscientist Tamami Nakano of Osaka University in Japan and her colleagues asked participants to rate real and computer-generated singing voices. Although the autistic and control groups rated the real voices differently, the 14 participants with autism nonetheless gave the artificial voices the same low ratings for humanness and emotional quality as their neurotypical peers did. The results were published in August in Cognition.
A team led by I-Fan Lin of Tokyo Metropolitan University measured how quickly people could judge whether a particular sound came from a human. (Audio examples included a violin playing a note and a person pronouncing the vowel “i.”) The 12 participants with autism not only performed the task faster than their neurotypical peers but also did it more accurately, readily responding to human voices even when important acoustic components were missing. The results were published online in May in Scientific Reports.