Justin Birch lost his ability to speak in 2003 as the result of a brain aneurysm, but these days he is such a facile conversationalist he can ask for his favorite dinner—Ruby Tuesday Minis with fries and a raspberry iced tea—as well as harass his opponents after he defeats them at Texas Hold 'em.

Of course, Birch, who turns 34 this week, is a polite resident of Cape Coral, Fla., who would never intentionally annoy anyone, but it is nice to have the same speech options as those who can speak on their own. Birch (who can walk with the aid of a cane) achieves this via an assistive communication device that allows him to tap out messages on a touch screen using a stylus. After his messages are composed, the portable pad uses special software to announce his thoughts in a simulated voice that sounds similar to Birch's own pre-aneurysm voice.

Eight out of 1,000 people—roughly 2.5 million in the U.S.*—cannot use their voice to communicate for a variety of reasons, whether it is a birth condition such as autism or Down syndrome, the onset of amyotrophic lateral sclerosis (ALS, aka Lou Gehrig's disease), or a traumatic event such as a stroke or brain injury, says Jim Shea, vice president of marketing for Pittsburgh-based DynaVox Mayer-Johnson, which makes a range of assistive communication devices, including the V system that Birch uses.
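As a quick check on that prevalence figure (the population estimate below is an assumption, not part of the article): with a 2009 U.S. population of roughly 307 million, eight per 1,000 works out to

8 / 1,000 × 307,000,000 ≈ 2.5 million people.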

In a sign of things to come, DynaVox and other makers of assistive communication devices are moving beyond Windows-based systems like the V to emulate smart phones like Apple's iPhone that integrate dynamic touch screens, wireless Internet connectivity and music players into a single portable package. The first of this next generation is the Xpress, a handheld communicator DynaVox introduced today.

These devices have come a long way from the gadgets first introduced in the late 1960s that allowed those who were mute to type out messages one letter at a time using a keyboard. For those without the use of their limbs, the typing was accomplished by watching a lighted display and puffing into a tube or touching a switch—depending upon the user's capability—when the desired letter was highlighted on a screen. British theoretical physicist Stephen Hawking, one of the most prominent users of such technology, communicates via a DECtalk DTC01 voice synthesizer developed by Digital Equipment Corporation in the early 1980s. Disabled by a motor neuron disease, Hawking uses his cheek to depress a switch that helps him choose letters and words when communicating.
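For readers curious how that early "watch the display and hit the switch" selection method works in practice, here is a minimal sketch of single-switch linear scanning in Python. It is an illustration only, not DynaVox's or DECtalk's actual software: the letter set, the dwell time and the simulated switch (a plain function standing in for a puff tube or button) are all hypothetical.

import time

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # hypothetical letter set; real devices add words and punctuation

def scan_select(switch_pressed, letters=LETTERS, dwell=0.5):
    """Highlight letters one at a time; return the letter that is lit when the switch fires."""
    while True:                                            # keep cycling until something is chosen
        for letter in letters:
            print(f"\r[ {letter} ]", end="", flush=True)   # the "lighted display"
            time.sleep(dwell)                              # dwell window for a switch press
            if switch_pressed(letter):                     # did the switch fire during the window?
                print()
                return letter

def spell(word, dwell=0.1):
    """Compose a message one scanned letter at a time, using a simulated switch."""
    message = ""
    for target in word:
        # The simulated switch fires only while the wanted letter is highlighted.
        message += scan_select(lambda letter: letter == target, dwell=dwell)
    return message

if __name__ == "__main__":
    print("composed:", spell("HI"))

Even in this toy form, the trade-off that made early devices painstakingly slow is visible: every letter costs, on average, half a pass through the alphabet at the chosen dwell time.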

From these painstakingly slow devices sprang subsequent generations that made use of newly developed technologies, including computers, touch screens and even eye-tracking optics that could determine what a user wanted to say by following his or her gaze across a digital display.

*Correction (8/11/09): This article originally stated that one out of eight people—roughly three million in the U.S.—cannot use their voice to communicate.

The Xpress, which costs about $7,500 and begins shipping at the end of this month, features a 12.7-centimeter touch screen, weighs in at less than one kilogram, and offers Internet access via Wi-Fi. To help make the Xpress more rugged, it has an eight-gigabyte flash memory drive (rather than a hard drive with moving parts that could break if the device is dropped) and a magnesium outer case.

Different needs, different devices
To serve the range of abilities among potential users, makers of assistive communication devices offer products that cover a spectrum of patient capabilities, Shea says, adding, "A person who is nonverbal would see a licensed speech and language pathologist, who would make a determination as to which technology is a fit for them."

"We first look at whether someone is at a disadvantage because they can't communicate effectively," says John Costello, a speech language pathologist at Children's Hospital Boston and director of the hospital's Augmentative Communication Program. A doctor will then consider the person's motor capabilities, hearing, vision, education and social environment, along with their desire to communicate, and attempt to match these with available technology. The technology is advancing at such a pace, he adds, "that even next year we'll look back…and say, 'I can't believe they didn't have that feature.'"

Costello, who has been given a demo of the Xpress, says DynaVox appears to have succeeded in making "something that looks cool and incorporates as many features as possible in one device." Although he wants more time to fully evaluate the Xpress, Costello says the new device might be appropriate for patients (like Birch) who have difficulty speaking but are ambulatory and like to move around. Other patients, particularly those just starting therapy, however, don't need a device as sophisticated as the Xpress. "They don't need the Ferrari yet, they need to learn how to drive first," he adds. "This is not an entry-level tool for a person who's not used to using technology."

Some of the more commonly used augmentative communication devices made by DynaVox, Prentke Romich Co. in Wooster, Ohio, Words+, Inc., in Lancaster, Calif., and others are essentially Windows XP computers with a variety of input interfaces, including touch screens, keypads or eye scanners. At least a dozen other companies make a variety of augmentative communication devices.

Learning to live with a synthesized voice
Birch has been using digital speech devices since 2006, when his speech therapist, Lucinda Diggs, introduced him to DynaVox. Birch became so adept at using DynaVox's devices that the company asked him to test its Windows XP–based V before it was introduced in January 2007.

Birch chose a voice that came standard with the V, although it is possible to record and add new voices if those available are not appealing. Birch's V sounds a bit like something you might hear when calling a movie theater for show times but with two important differences: The sound quality is clear and the voice itself has a consistent sound from word to word (rather than sounding like a string of words cobbled together from different digital voices). "I just like it," he says, "because it sounds close to my real voice."

To save time and energy when he is out at restaurants or in other common social situations, Birch has created lists of words and phrases he can summon via the V's touch screen whenever appropriate. "I have a page that has all of the information that I use when I go to a restaurant to eat," he says. "I have a list of food that I like to eat and how I like it prepared."

When Birch needs to compose a more specific question or response (one not on his lists), he triggers the phrase, "Please be patient with me while I am composing what I want to say," to buy himself some time.

Birch also has a string of phrases he can tap when playing Texas Hold 'em poker, a hobby he has taken to in recent years. He can call, raise and fold via his V. He has even downloaded sound bites from the Web—including Homer Simpson's signature "Woo hoo!" victory cry—that he can play to express feelings when words fall short. "People usually get a kick out of that," he says.

Of course, things don't always go according to plan. During one poker tournament, he defeated another player at the final table and, as she got up to leave the table, his V said, "Good game, you bitch." Birch claims this was an accident, as the two phrases were right next to each other on his screen. "I told her that I hit that button by mistake," he says. "She had a good sense of humor."
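In software terms, Birch's prepared pages amount to a simple mapping from situations to ready-made phrases, and his poker mishap hints at why button layout matters as much as the phrases themselves. Below is a minimal sketch of that idea in Python; the page names and structure are hypothetical, and apart from the phrases quoted above, so is the content—this is not Birch's actual setup or DynaVox's data format.

# A toy phrase-page structure: a dictionary mapping a situation to the
# ready-made phrases a user can tap. Illustrative only.
PHRASE_PAGES = {
    "restaurant": [
        "Ruby Tuesday Minis with fries, please.",
        "A raspberry iced tea, please.",
    ],
    "poker": ["Call.", "Raise.", "Fold.", "Good game."],
    "general": [
        "Please be patient with me while I am composing what I want to say.",
    ],
}

def speak(phrase):
    """Stand-in for the device's text-to-speech output."""
    print(f"(synthesized voice) {phrase}")

def use_page(situation):
    """Play the phrases available on a page, as the touch screen would list them."""
    for phrase in PHRASE_PAGES.get(situation, []):
        speak(phrase)

if __name__ == "__main__":
    use_page("restaurant")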

For more serious endeavors, Birch uses his V to compose presentations that he can give to different groups of people. He recently submitted a proposal for a speech he would like to make at the upcoming 2010 Assistive Technology Industry Association conference in Orlando.

DynaVox's new Xpress intrigues Birch, although he has not yet had a chance to evaluate a prototype. "They are heading in the right direction," he says. "The smaller the better."