Being accustomed to the sound of a person's voice makes it easier to hear what she is saying. New research shows that simply being used to watching somebody's soundless lip movements has the same effect.

A research team at the University of California, Riverside, asked 60 volunteers to lip-read sentences from silent videos of a person talking. The volunteers then listened to an audiotape of sentences spoken against background noise and were asked to identify as many words as they could. Half of them heard the same person they had just watched; the other half heard a different talker. Those who lip-read and heard the same person identified more words in the muffled sentences than those who lip-read one talker and listened to another.

The findings suggest that our brain can transfer familiarity with the way a person moves his mouth while he talks into familiarity with the sound of his voice, “even if we have never actually heard that voice,” says lead researcher Lawrence Rosenblum.

Although scientists have long known that visual signals play a key role in speech recognition, how the brain blends the two stimuli is still a mystery. Some imaging studies suggest that the auditory cortex is involved in processing not only auditory but also visual speech information.

But our brain constructs the words we ultimately perceive from more than just sounds and lip movements; our expectations come into play as well. Many studies have shown that people hear speech differently depending on their beliefs about the talker's identity—his or her social or ethnic background, for example—says the University of Chicago's Howard Nusbaum: “Listeners' expectations can be just as powerful as acoustic cues.”