When someone approaches you to ask, “What’s wrong?” you know you are broadcasting unhappiness, whether or not you have said a word. Perhaps it was a grimace or your sluggish gait that conveyed the message. You cannot help but communicate your mood to colleagues, neighbors and fellow commuters through numerous subtle cues.
Sensing the emotional states of others is an important part of social interaction. If you could not do this well, you might end up incongruously slapping the back of someone in tears or stopping an anxious co-worker on his way to a meeting. People with autism or schizophrenia often find it extremely difficult to read other people’s feelings and, as a result, struggle to relate to others.
Being a master of these social hints is critical to success in many domains. You can solidify friendships by recognizing when a person is sad and doling out appropriate comfort, for example. To succeed in business, you also need to accurately detect the sentiments of other people when pitching a new idea or deciding when to ask for a promotion. National security can even hinge on sensing emotions. In the U.S., millions of dollars are spent every year on training law-enforcement and security officials to read feelings in people’s faces. Suspects who are faking, say, regret or calm might, after all, be hiding a criminal act or the intention to commit such an act.
In the past, scientists focused largely on the muscles of the face and a region of the brain responsible for detecting facial features. Lately, however, researchers have found that contextual cues—including a person’s posture, the tone of his or her speech, and the attitudes of bystanders—are critical to emotion perception. By pinpointing the regions of the brain that subconsciously assemble those cues within milliseconds, scientists are now beginning to understand how our senses shape our social skills.
Face First
In pioneering studies on emotion perception back in the 1970s, psychologists Paul Ekman and Wallace V. Friesen, then both at the University of California, San Francisco, classified expressions by what they called “facial action units,” combinations of physical changes in the face. For example, to generate a smile we raise the corners of our mouth and contract muscles that create wrinkles around our eyes. Some two decades later psychologist Nancy Kanwisher, now at the Massachusetts Institute of Technology, and her colleagues identified a blueberry-size region of the brain, the fusiform face area (FFA), that responds specifically to faces [see “A Face in the Crowd,” by Nina Bublitz; Scientific American Mind, April/May 2008].
In reading the emotions of others, the FFA collaborates with the amygdala, a processor of emotions. In 2001 neurologist Patrik Vuilleumier of the University of Geneva and his colleagues found that an individual’s amygdala responds to fearful expressions even when that person is paying attention to something else. The FFA also responded more strongly to fearful faces than to neutral ones, suggesting that the amygdala sends feedback that can augment the firing of neurons there.
Yet researchers now know that faces alone do not always betray feelings with great fidelity. As a result, we typically evaluate an expression’s context, including body posture, surrounding faces and tone of voice. The combination, it turns out, makes our judgments more reliable. A face that in isolation appears contorted in disgust looks proud when it is attached to a muscular physique with arms raised in triumph. What seems like a scowl may instead signal fear if it accompanies a description of danger. In a tight close-up taken just after a big win at the 2008 U.S. Open, tennis player Serena Williams looks either angry or pained; zoom out, and she is clearly triumphant.
The more ambiguous the expression, the more we look to other information. Researchers have begun searching for regions of the brain that can interpret all the incoming data—and then solicit more, if necessary. Neurons in such “convergence zones” would need to respond to more than one type of sensory cue—sound as well as sight, for example—and identify them as arising from a common source, taking the first step toward gaining insight into another person’s mind.
Sensory Switchboards
In a study published in 2000 psychologist Randy L. Buckner, then at Washington University in St. Louis, and his colleagues found evidence for one such zone. The researchers exposed volunteers lying inside a brain scanner to word fragments, either displaying them on a screen or playing their sounds, and asked the volunteers to complete the fragments into words as quickly as possible. Regardless of whether the subjects saw letters or heard sounds, the words came faster when a fragment was presented a second time. Accordingly, parts of the prefrontal cortex charged with forming abstract thoughts reacted more weakly to repeated fragments than to novel ones, suggesting a boost in brain efficiency the second time around. Because these regions showed the same response to both visual and auditory input, they fit the profile of an area that could integrate different streams of sensory information into an overall impression of an object or scene.
Analogous brain regions seem to assimilate emotional stimuli. In a 2010 study Vuilleumier and his colleagues monitored brain activity while volunteers viewed or listened to actors expressing five different emotions: anger, disgust, happiness, fear and sadness. The actor expressed each emotion with his or her body (face obscured), face (body out of view) or tone of voice (no visual input). The participants then rated how intensely they thought the actor was feeling the emotion portrayed.
The researchers pinpointed two brain regions whose responses appeared to represent the feeling portrayed, regardless of whether the face, body or voice conveyed it: the medial prefrontal cortex, a part of the social brain involved in understanding others’ intentions, and the superior temporal sulcus, a groove in the temporal lobe involved in perceiving biological motion and the direction of a person’s gaze. These cerebral hot spots may serve as part of the switchboard that gathers and analyzes data relevant to recognizing emotion in others.
Odors, too, seem to join other sensory data in forming a swift impression of a person’s feelings. In a 2010 study one of us (Seubert), then working in Ute Habel’s group at RWTH Aachen University Hospital in Germany, and our colleagues analyzed how the brain registers disgust, which can be difficult to recognize from a face alone. We asked people to identify feelings from pictures of expressive faces—disgusted, happy or neutral—while they lay inside an MRI scanner. Along with the pictures, participants were exposed to either pleasant or repulsive odors piped to their noses through narrow tubes.
If an unpleasant odor accompanied a disgusted expression, people recognized the revulsion much faster than they did from the face alone. As expected, odors did not speed up recognition of happiness. The presence of an unsavory odor also diminished activity in the FFA, suggesting that the smell made the facial emotion easier for the brain to process. We saw similar decrements in responsiveness in prefrontal brain areas and in the insula, which encodes disgust. Because sights and sounds also activate regions of the prefrontal cortex, these results bolster the idea that the brain contains a network of regions responsible for weaving together the emotional messages embedded in several types of sensory data.
Lower Thoughts
Not all of that sensory blending occurs at a high level in the brain, however. More basic cross talk between the senses may also take place; for example, regions dedicated to sound perception may respond to the sight of moving lips. In 2002 a team led by psychologist Sophie Molholm of the Nathan S. Kline Institute for Psychiatric Research in Orangeburg, N.Y., reported detecting brain-wave patterns indicative of such early interactions. The researchers asked volunteers to press a button as soon as they either saw a circle on a screen or heard a high-pitched tone. In some instances, the circle was accompanied by the tone. When the stimuli were simultaneous, people reacted significantly faster. The combination of sight and sound also boosted the amplitude of a particular brain wave that appears within 50 milliseconds of a novel stimulus, and the boost exceeded the sum of the responses evoked by the visual and auditory signals presented individually. Because a neural signal from the eyes needs at least 50 milliseconds just to reach the first stage of processing in the brain, these results suggest that visual and auditory cues combine long before they reach the front of the brain.
In light of this and other evidence, scientists believe the brain deciphers emotional content in several stages. Its quick and dirty assessment, orchestrated largely by the amygdala, can combine related stimuli to initiate gut responses when a situation requires immediate action. Later, frontal brain regions may perform a more detailed analysis to guide more deliberate behavior.
Whatever goes on in the brain, knowing that emotion perception involves knitting together an array of sensory input may help us read others more accurately. Software that can interpret emotional cues in facial expressions and tones of speech already exists, and in the near future such technologies, or other kinds of training regimens, could help teach autistic individuals, people with schizophrenia and others who struggle to detect feelings what to look for in social situations. The rest of us should remember that getting a good handle on another person’s mood may mean taking a step back to see what that smirk, smile or furrowed brow really means. The posture, manner of speaking or aroma that accompanies the facade could tell us all we need to know.
This article was published in print as "I Know How You Feel."