A group of happy people exits the lobby of the Luxor Hotel and climbs aboard a sightseeing bus, excited to begin a second day touring Las Vegas. The men and women chat and laugh, poking fun at one another about events that happened the night before. But it is remarkably quiet. Only their hands are moving as they look at their partners, their faces and body positions emphasizing their words. The other passengers on the bus sit there awkwardly, surprised to be excluded from the energetic conversations. It is then that they realize how deaf people must feel when they are among those who hear.

Every aspect of verbal communication is possible with sign language: expressing joy, conveying anger, telling tales, trading jokes. The discourse follows the same logical principles as spoken language. Yet it has its own syntax, semantics, rhetoric and irony, which involve far more than just the position of fingers on a hand: hand gestures, facial expressions and body postures all add to the repertoire. Furthermore, just as Spanish differs from Swahili, and American English differs from British English, American Sign Language (ASL) differs from Danish Sign Language and also from British Sign Language. Sign languages even have their own dialects and accents, analogous to a Bostonian's clip or a Texan's drawl. There is sign poetry, and there are even a few sign choirs.

More than one million Americans are completely deaf, yet their communication is rich in complexity. ASL is said to be the fourth most common language in the U.S. For decades, however, the hearing world looked on sign language as a kind of pantomime and often ridiculed the people who used it. Only recently have linguists learned to appreciate sign language's intricate grammar. And only more recently still have neuroscientists begun to determine how the brain handles the task. The surprise is that sign language is processed by the same brain regions that understand and generate spoken language, even though vision and hand movements are used in one format, and hearing, vocal cords and lip movements are used in the other.

Cultural Suppression
For a long time, brain scientists did not bother to investigate sign language, because they adhered to the same false assumptions made by the general public. Chief among them was that sign gestures were merely primitive, symbolic attempts to represent objects or actions--substitutes for spoken words. That misconception in turn led to the supposition that the same sign language was used throughout the entire world--if a country allowed one to develop at all.

Social and even political repression of deaf people was incredibly strong during the 1800s and much of the 1900s. In Germany, for example, German Sign Language was banned in schools for the deaf right up to the 1980s. This was perhaps the most extreme case of a trend that had held sway across Europe for a century--that deaf people, supposedly for their own good, should be integrated into the hearing community. Preference for this "oral method" meant that schools, and society in general, suppressed the use of sign language.

The outcome was horrible for deaf students. Because they received no feedback from their ears, they could neither control the sounds they made with their mouths nor improve their speech through practice. Learning to talk was torture, success was limited, and the upshot was that little other learning took place. Students who graduated from schools for the deaf were condemned to menial jobs, and some were treated as though they were intellectually disabled. Without their own linguistic culture, they lost most of their ability to communicate with others, which eroded their self-esteem and blocked almost any chance for social or economic advancement.

Similar patterns took place in the U.S., with scattered periods of self-determined advancement by deaf people alternating with the hearing world's imposed agenda. The damage from this paternalistic treatment runs deep; even today it is not uncommon for average people to look on their fellow citizens who are deaf or mute as somehow inferior.

Yet the U.S. may have subtly benefited from Europe's heavy-handedness. One linguistic expert and educator decided to leave his continent and show the world what could happen if sign language were indeed allowed to develop. In 1816 a French teacher named Laurent Clerc emigrated to America and soon co-founded the country's first school for the deaf, in Hartford, Conn., with Thomas Hopkins Gallaudet. Clerc began teaching French Sign Language (FSL) to Americans, who mixed it with the more rudimentary forms of local, natural sign language they had cobbled together. Although cultural prejudice prevailed, the language did grow. Today's ASL is unique, yet it is far more "French" than spoken American English is. Modern ASL and FSL share a substantial amount of vocabulary and certain grammatical conventions, yet they are not mutually intelligible: a person trained in only one of them cannot understand the other.

It is not surprising, then, that the sign languages and spoken languages native to any given country have developed very differently; they often have radically divergent histories. ASL and British Sign Language differ far more than spoken American and British English do.

Grammar in Hand
Although it is true that the deaf community uses a few signs that "look" like the item they describe, the vast majority of signs are abstract and conventionalized. Some signs have changed over time, and new ones are emerging today--the same phenomena that drive spoken languages.

As linguists finally began to appreciate the multifaceted nature of sign language, they began researching its conventions. And as they found many traits that were similar to those of spoken languages, neuroscientists began to wonder if the brain processed the two forms of communication in similar ways [see top box on next page].

In some sense, sign languages are more complex than spoken languages. Anyone who speaks English or Danish or Swahili links one sound to another, building syllables, words and sentences. The formation is linear. Sign languages, however, function in three dimensions. A hand is held in a particular position while the fingers create a specific "hand shape." Each sign language has a catalogue of hand shapes, just as each spoken language has an inventory of phonemes. But in addition to hand shape, there are hand motions, facial expressions, shape of the mouth and movement of the entire body. All these facets operate simultaneously. Using a single verb gesture, a signer can deliver three, six or even nine different items of information. Visual patterns are the bearers of meaning.

For example, the spoken word toast contains the sounds for "t," "oh," "s" and "t," which can deliver that meaning only when they are in that order. Deaf signers do not use just a finger alphabet but (in ASL) express the concept of "toast" by forming a V with the first two fingers of the right hand, touching it to an upturned left palm, and then circling it down to touch the back of the left hand. Taking away part of a sign, or replacing it with another element, creates a different word. Touching the same V to the right cheek and twisting it inward means "vegetable." Changing the movement by holding the V in front of the chest and squeezing the two fingers together gives the sign for "cut."
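The minimal-pair principle in those examples can be sketched in code. The following toy model (the names and simplified parameter values are illustrative, not a real ASL lexicon or linguistic software) represents a sign as a bundle of simultaneous parameters and checks which parameters distinguish two signs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """A sign as a bundle of simultaneous parameters (greatly simplified)."""
    hand_shape: str   # e.g., a V made with the first two fingers
    location: str     # where the sign is made
    movement: str     # how the hand moves

# Simplified renderings of the article's three ASL examples.
TOAST     = Sign("V", "upturned left palm", "circle to back of left hand")
VEGETABLE = Sign("V", "right cheek",        "twist inward")
CUT       = Sign("V", "front of chest",     "squeeze fingers together")

def differing_parameters(a: Sign, b: Sign) -> list[str]:
    """List the parameters that distinguish two signs (a minimal-pair check)."""
    return [f for f in ("hand_shape", "location", "movement")
            if getattr(a, f) != getattr(b, f)]

# All three signs share the V hand shape; location and movement alone
# tell them apart, much as a single phoneme distinguishes spoken words.
print(differing_parameters(TOAST, VEGETABLE))  # ['location', 'movement']
```

The point of the sketch is that the parameters vary independently and simultaneously, whereas the phonemes of "toast" vary only along a single linear sequence.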

Space Adds Speed
Many sign languages also utilize spatial dimensions to express the relations among subjects and objects in a sentence, interactions between persons, and many other grammatical and content concepts. Objects can also be merged with verbs. For example, the ASL sign for "pick up" changes according to its object: picking up a marble, a cup or a stone is portrayed in different ways.

Variations in gesture also deliver information about an object's qualities or a movement's details. Denoting a large book versus a small one is done with the extent of hand movement when making the sign for "book." In some ways, spoken language cannot express as much. Saying "I stood on the bed" provides no information about which leg you stood on, which side of the bed you were on, or how quickly you stood up. In sign language, all of that can be conveyed in a single movement. These differences can sometimes make it difficult for translators to convey spoken dialogue in signs, or the other way around.

Interactions with people are also presented spatially. In many sign languages, including ASL, a signer defines a point near her body as the standing position of a person she is talking about, and the signs about that person will be presented there. For the sentence "Emily visited Nick," the signer will define one spot for Emily and another for Nick. The Emily spot will then symbolize that person for the rest of the story. To sign "Emily visited Nick," all the signer needs to do is make the sign for "visited" and move it from the Emily spot to the Nick spot.

In this way, sign language can be very economical. Instead of filling each sentence with redundant information by repeating "Emily," "Nick" and "I"--as many spoken languages require--space takes over this grammatical function. Such traits help sign language overcome one of its main disadvantages: hands cannot "talk" as fast as mouths can speak.
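The referent-and-locus mechanism just described behaves something like a lookup table: names are bound to points in signing space once, and directed verbs then move between those points. This toy sketch (hypothetical function names; the spatial labels are invented for illustration) makes the economy concrete:

```python
# Toy model of spatial reference in sign language (not linguistic software):
# a signer assigns each person a locus in signing space, then directs verbs
# from one locus to another instead of repeating names in every sentence.

loci: dict[str, str] = {}  # name -> assigned point in signing space

def assign_locus(name: str, point: str) -> None:
    """Introduce a referent once by placing it at a spot near the signer."""
    loci[name] = point

def directed_verb(verb: str, subject: str, obj: str) -> str:
    """Render a verb as a single movement between two established loci."""
    return f"{verb}: {loci[subject]} -> {loci[obj]}"

assign_locus("Emily", "left of signer")
assign_locus("Nick", "right of signer")

# "Emily visited Nick": one sign moving from Emily's spot to Nick's spot.
print(directed_verb("VISIT", "Emily", "Nick"))
# VISIT: left of signer -> right of signer
```

Once the loci are set, every later sentence about Emily or Nick reuses them for free, which is exactly the redundancy-saving trick the text describes.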

Spatial orientation also means that signs may acquire certain meanings in context. A viewer who jumped into the middle of a short video clip of a signer telling a story, for example, might find it difficult to follow what was being conveyed. In some ways, sign language and its narratives may be more closely intertwined than is the case with spoken or written language.

Questionable Faces
Facial expressions are equally important to grammar in virtually all sign languages. Many hearing people, on seeing the animated faces of individuals who are signing, mistakenly believe the people are expressing strong emotions. But facial expressions are much more than frosting on the cake. Without the proper facial expression, many signs are simply wrong. For example, in ASL, an open right hand is thrust forward to indicate the future; the facial expression is neutral. But the same motion made with puffed cheeks means the distant future, and a wincing expression means the very distant future. Body language may come into play, too. The hand motion for "driving" a car is two extended hands moving as if they were turning a steering wheel. For driving a truck, the shoulders are tipped back and the head is pitched upward slightly. Driving a race car involves a furrowed brow.

The changing intonation of a spoken question has its counterpart in sign language, too. ASL signers signal a question by raising the eyebrows and widening the eyes. They would ask a question of a specific person within a group by tilting their body toward that person while signaling with their eyes and eyebrows.

So-called sign markers complete the repertoire. For example, in ASL the written past tense ending "-ed" is conveyed by pushing a flattened right hand backward over the right shoulder.

Learn Early
Understanding sign language's intricacies makes it apparent that learning this form of communication may be as difficult as learning a foreign spoken language. That is why the old suppression of signing, especially in children, was so damaging and must never be repeated.

Fewer than 10 percent of all hearing-impaired children have two hearing-impaired parents. A deaf child born to deaf parents who already use ASL will acquire the skill naturally, just as a hearing child picks up spoken language. But households with one or two hearing parents must begin using and teaching sign language from a newborn's early days. As with any language, interaction between a baby and parents is vital to acquisition.

Hearing parents who do not know sign language must learn it along with their child and should introduce it as early as possible. Research shows that the first six months of a child's life are the most crucial to development of oral language skills, and newer studies show the same applies for visual language. The earlier exposure begins, the more competent the child will become; tests indicate that native signers of ASL are consistently more accomplished than individuals who learned ASL later. Brain-imaging studies also show that people who have grown up speaking and later learn sign language process visual imagery somewhat differently from individuals who were raised with ASL from birth, suggesting that sign language enhances certain visual-processing functions.

All these insights, of course, mean that early screening for hearing loss should be routine. Discovering deafness early and exposing infants to sign language will help them lead lives that are full of conversation. The happy deaf people on the tour bus in Las Vegas were not merely communicating. They were demonstrating just how rich and unique their language and lives can be.

(The Author)
JENS LUBBADEH is a biologist and science writer in Hamburg, Germany.

(Further Reading)

  • The Art of Sign Language. Christopher Brown. Thunder Bay Press, 2003.
  • Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua. Ann Senghas, Sotaro Kita and Asli Özyürek in Science, Vol. 305, pages 1779–1782; September 17, 2004.
  • National Association of the Deaf: www.nad.org
  • An online video dictionary of American Sign Language can be seen at http://commtechlab.msu.edu/sites/aslweb/browser.htm