It’s tough to pick a familiar face out of a crowd—but focusing on a known voice in a noisy room is easy. And a new study scanned volunteers’ brains to look at how we solve the so-called cocktail party problem. The work is in the journal Nature. [Nima Mesgarani and Edward F. Chang, "Selective cortical representation of attended speaker in multi-talker speech perception"]
Researchers recorded the activity of the subjects’ cerebral cortices while playing them sentences spoken by different voices. First, the subjects listened to individual sentences and reported key features of each one. Then they heard two different sentences played at the same time, but had to listen to and recall details from only one voice.
Each voice drew a distinct response from the auditory cortex. And even with a second sentence playing simultaneously, the researchers saw the cortex respond specifically to the voice the subject was focusing on. The finding indicates that our brains process sound based not only on the audio input they receive, but also on our listening goals. And it could lead to speech recognition systems that stay accurate in crowds—even at a cocktail party.
[The above text is a transcript of this podcast.]
[Scientific American is part of Nature Publishing Group.]