December 2008

The Science of Finding a Face in the Crowd

Discrete brain sections form a dedicated network to recognize faces

As we walk along a city street, it takes no effort to recognize the face of a friend in the crowd. But the ease of the feat masks its cognitive complexity—all faces have eyes, noses and mouths in the same relative place and can bear an array of emotional expressions. For decades, scientists have debated the basis for our facility with faces: either human brains evolved specialized face-processing machinery, distinct from regions that deal with other objects, or they process all objects using an expansive, multipurpose network, merely developing an expertise for faces. Two experiments have now clarified this perennial dispute by uncovering a distinct network that is indeed dedicated to faces.

In the late 1990s brain-imaging studies revealed that discrete regions of the temporal lobe—a section of the human brain important for object recognition—fired up more strongly when people looked at faces than at any other thing. It was unclear, however, whether these regions actually contained cells that were specifically triggered by faces or whether they responded more broadly—activated by, for example, any object related to people or by something that required attention to detail.

A few years ago Doris Tsao and her then colleagues at Harvard Medical School addressed this matter. Tsao located dedicated “face patches” in monkeys and discovered that these patches were packed with neurons that responded only to faces. “We demonstrated that they are highly specialized regions,” says Tsao, now at the University of Bremen in Germany. “But we still didn’t know how they worked—whether each patch was independent or whether they were all involved in a unified circuit.”

So Tsao forged ahead, using a technically impressive combination of brain imaging and single-cell stimulation. She and her graduate student Sebastian Moeller used electrodes to prod neurons in specific face patches, while observing the rest of the brain with functional magnetic resonance imaging (fMRI). Earlier this year they reported finding that the face patches were tightly and specifically interconnected: stimulation of a face patch activated other face patches almost exclusively, whereas stimulation outside a face patch activated only nonface regions.

“This really blew me away,” says Margaret Livingstone, a neurobiologist at Harvard Medical School, who oversaw Tsao’s earlier work. “The connectivity between different face patches is incredibly precise, face patch to face patch, suggesting that this is a really special system that’s got its own anatomy, completely separate from all other objects.”

Tsao then cast her eye on the frontal lobe, which turns sensory data into goal-directed behavior. “We don’t just perceive faces—we respond to them,” she explains. “We determine their emotional expression, store them in our memory, categorize them as friend or foe.” So face patches could be in the frontal lobe, she thought.

Using fMRI, Tsao found three discrete face patches in the frontal lobe. One patch was in the orbito­frontal cortex, which evaluates emotions and social behaviors. Further testing revealed that emotional faces excited this patch more than neutral faces did, indicating that it might have a specific role in interpreting emotional expressions. (In contrast, the face patches in the temporal lobe did not respond any differently to emotional faces.) Indeed, injury to the frontal lobe can leave victims able to recognize people but unable to assess their mood.

Tsao now hopes to determine how each patch contributes to facial processing. She surmises that they may form a functional hierarchy—for example, one patch may detect faces, and then other patches chime in to report detection of, say, male faces or surprised faces. She strongly suspects these later patches may communicate with the medial temporal lobe, a region where, in 2005, Christof Koch of the California Institute of Technology discovered neurons that responded exclusively to specific individuals, such as actor Halle Berry. Tsao’s findings hint at the step-by-step processing that results in neurons that can encode an entity as complex as a particular person.
