Kids diagnosed with autism often struggle with making eye contact as well as recognizing emotions and social cues exchanged with other people. A handful of tech entrepreneurs hope Google Glass could become a tool to help these children better identify conversational nuances in real time—and one such entrepreneur received a vote of confidence in his work Tuesday, taking home the $15,000 “Cure it!” 2016 Lemelson–MIT Student Prize, which rewards technology-based health care inventions.
Stanford University graduate student Catalin Voss’s Autism Glass project certainly fits the bill. The 20-year-old inventor, working in the lab of pediatrics associate professor Dennis Wall at the Stanford School of Medicine, has developed emotion-recognition software for Google Glass designed to tell an autistic child wearing the device whether a person the child looks at is happy, sad or angry.
One of the most effective autism intervention techniques so far has been behavior-focused therapy in which therapists use flash cards to promote learning and language. “But there are just not enough therapists to meet increasing demand, and flash cards are removed from real life,” Voss says. The Autism Glass would give kids real-time social cues in a daily life setting while providing parents and psychologists data to better understand autism. One in 68 children in the U.S. alone is now diagnosed with the condition, according to a 2014 report from the U.S. Centers for Disease Control and Prevention.
“The promise of using Google Glass is that it’s transportable out of a therapeutic setting into an everyday social situation, whether it be school or a birthday party,” says Rebecca Landa, director of the Center for Autism and Related Disorders at the Kennedy Krieger Institute.
Voss, who was born in Heidelberg, Germany, had already built and sold numerous iPhone apps there when, at 15, he got a job making mobile apps for Silicon Valley start-up PayNearMe. As a 17-year-old computer science freshman at Stanford, he created vision software that tracks faces, measuring the positions of the eyes and mouth to identify smiles, frowns or smirks and gauge emotion. Voss later launched the start-up Sension to develop an app based on his software that could measure a person’s level of engagement. He sold it to Japanese company GAIA System Solutions—which is using the technology to create a car safety feature to alert sleepy or distracted drivers—and then switched his focus to autism.
The Autism Glass setup pairs a Google Glass with a smartphone, which runs software that analyzes data from the head-mounted display and feeds cues back to the user. The system also records video that parents can review with their kids to reinforce learning. Voss’s goal is that after a limited learning period, kids would no longer need the device.
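The article does not describe the software’s internals, but the feedback step it outlines—turning a classifier’s emotion estimate into a cue shown to the child—can be sketched roughly. Everything below is hypothetical: the function name, the emotion labels, and the confidence threshold are illustrative assumptions, not details of the Autism Glass project.

```python
# Illustrative sketch only: map hypothetical emotion-classifier scores
# to a feedback cue, as a system like the one described might do.
# None of these names or thresholds come from the actual project.

def feedback_cue(scores, threshold=0.6):
    """Return a cue for the strongest detected emotion, or None if the
    classifier is not confident enough to show any feedback."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # stay silent rather than display a shaky guess
    return f"{label} ({confidence:.0%})"

# A frame where the classifier leans strongly toward "happy":
print(feedback_cue({"happy": 0.82, "sad": 0.08, "angry": 0.10}))  # happy (82%)
# An ambiguous frame produces no cue:
print(feedback_cue({"happy": 0.40, "sad": 0.35, "angry": 0.25}))  # None
```

Suppressing low-confidence output, as sketched here, reflects a design concern the article raises implicitly: a real-time aid that guesses wrong could mislead a child, so showing nothing can be safer than showing noise.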
A preliminary study involving 20 children showed that they were willing to wear the device and interact with it, and an in-home trial that started in January is now giving Voss data to fix bugs and refine the software. So far there has been encouraging feedback from the families of 10 enrolled users. “Teachers are noting that they are making better eye contact and staying engaged,” says Voss, who is now toying with different feedback mechanisms including color, emoticons or audio cues. He is also working on perhaps the most critical challenge: how to help kids respond to the emotions they detect and identify.
Voss eventually wants to design a system that can track multiple faces engaged in conversation. He wants to make it learn and adapt to a particular person’s facial expressions—say, those of the child’s parent or therapist. And he wants to make it more nuanced. “We dream of telling how anxious someone is,” says Nick Haber, one of Voss’s colleagues and a Stanford postdoctoral researcher.
With validation from in-home trials involving 100 children, Voss plans to license the technology or form a start-up himself. The software could be adapted for wearables other than Google Glass, he says, or could simply be combined with a button camera and earpiece. This versatility is important given Google’s unclear commitment to its Glass technology, which was put on the shelf in 2014 only for a new version to be seemingly resurrected in a U.S. Federal Communications Commission filing last December.
Brain Power, LLC, a start-up in Cambridge, Mass., is also trying to use Google Glass to help autistic children. Founder Ned Sahin, a cognitive neuroscientist, calls his system a “wearable classroom.” Through various interactive games and rewards, it helps kids identify emotion, make eye contact, learn language and control repetitive behavior such as rocking. Sahin says the system has already been tested on 200 people ages 4 to 24.
Some experts believe Google Glass or other wearables could have a major advantage over existing systems using iPads and computer games. “It’s getting them to look up and focus on the face…a behavior that’s very difficult to learn,” says Nancy Tarshis, a speech language pathologist at the Albert Einstein College of Medicine’s Children’s Evaluation and Rehabilitation Center.
Still, kids might have trouble translating the specific skills they’ve learned with the Glass to a more general and dynamic real-world setting, says Kennedy Krieger’s Landa. Like any other technology intervention, it could work for some kids and not others, she adds.
So although the technology is incredibly exciting, “the tool is one piece of the puzzle,” Tarshis notes. “It’s not a miracle. It’s got to be put in context of a good educator or a good therapist.”