Mobile devices have become incredibly popular for their ability to weave modern conveniences such as Internet access and social networking into the fabric of daily life. For people with disabilities, however, these devices have the potential to unlock unprecedented possibilities for communication, navigation and independence. The emergence of mobile “assistive” technologies, influenced heavily by the passage of the Americans with Disabilities Act (ADA) 25 years ago, marks a major step forward for people with disabilities.

The U.S. Congress passed the ADA in July 1990 as a civil rights law to protect people with disabilities from discrimination. The act requires businesses, schools and government agencies to meet certain requirements to ensure that people with disabilities have equal access to their services and facilities. Over the past quarter century the ADA has prompted researchers and engineers to consider the needs of people with disabilities as they develop new products and services, says Harry Hochheiser, an assistant professor of biomedical informatics at the University of Pittsburgh.

Today more than one in five adults has some form of disability, according to a report released last week by the U.S. Centers for Disease Control and Prevention. Fortunately, nearly all computers and mobile devices have in recent years integrated accessibility features that facilitate their use—among them speech recognition, speech-to-text and text-to-speech conversion, and captioning.

Examples of such features are not hard to find. Apple’s VoiceOver software is a screen reader built into its Mac and mobile operating systems. VoiceOver for iOS reads aloud information from iPhone or iPad screens as the user passes a finger over icons and text. Google offers similar capabilities on Android mobile devices through its TalkBack feature. Both Apple and Google mobile devices also work with Bluetooth-connected braille keyboards. In addition, “assistive touch” apps available on Apple, Google Android and other mobile devices help users who are unable to perform certain gestures, such as the multifinger swipes required to use touch screens.

Some of the most exciting assistive technologies are still in their early stages, either in the lab or in search of funding to take the next step. “If anything, I think the big issue is the transfer—so many good ideas in the lab that demonstrate the possibilities of assistive technologies don’t make it into practice,” says Hochheiser, who is also chair of the Association for Computing Machinery’s Accessibility Committee. For many of these projects, the key to success will be finding ways to tightly integrate new assistive features at the operating system level so they can be used seamlessly with existing devices, he adds.

The following five technologies have the potential to further transform today’s Apple iOS, Google Android and other popular mobile devices into tools that can deliver even greater levels of freedom and independence to people with any number of disabilities.

5 Mobile Technologies Help Level the Playing Field for People with Disabilities [Video]

1. Bar-Code Readers



Smartphone-based bar-code scanner apps are one way mobile devices can help people with vision impairments do their shopping. Courtesy of the Smith-Kettlewell Eye Research Institute

Bar-code readers, in use since the early 1970s, are no longer the exclusive domain of warehouse workers and store clerks. Apple’s App Store and Google Play have dozens of apps that turn smartphones into bar-code scanners to help with inventory management, data collection and even price-checking. More recently app developers have begun to think of these scanners as a way to help people with visual impairments identify items as they work or shop. Digit-Eyes, for example, lets users of iOS devices create labels that they can affix to different items and then read those labels using the autofocus cameras on their Apple devices. The app can also read a number of the standard UPC (Universal Product Code) and QR (Quick Response) codes found on product labels in stores.
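Apps that decode UPC labels can lean on the symbology’s built-in error check: the twelfth digit of a UPC-A number is a check digit computed from the first 11, which lets a scanner reject misreads. A minimal sketch in Python (the sample product number is purely illustrative):

```python
def upca_check_digit(digits11):
    """Compute the UPC-A check digit for the first 11 digits."""
    odd = sum(int(d) for d in digits11[0::2])   # positions 1, 3, 5, ... (1-indexed)
    even = sum(int(d) for d in digits11[1::2])  # positions 2, 4, 6, ...
    return (10 - (3 * odd + even) % 10) % 10

def is_valid_upca(code12):
    """A 12-digit UPC-A is valid if its last digit matches the check digit."""
    return (len(code12) == 12 and code12.isdigit()
            and upca_check_digit(code12[:11]) == int(code12[-1]))

print(is_valid_upca("036000291452"))  # True: well-formed code
print(is_valid_upca("036000291453"))  # False: last digit corrupted
```

Validation like this is why a scanner app can afford to guess aggressively from blurry video frames: a bad guess almost always fails the check.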

Early versions of these bar-code and QR code scanner apps—including RedLaser, Nokia’s Point and Find, and Realeyes3D—required users to be able to see well enough to locate and home in on the bar code via the smartphone’s screen. In 2009, however, researchers at the Smith–Kettlewell Eye Research Institute proposed a smartphone-based scanner that included a computer vision algorithm that could process several frames of video per second to detect bar codes from a distance of about a dozen centimeters. Their proposed scanner could also issue directional information via the phone’s speaker to help a visually impaired user locate the bar code.

By the end of 2012, Smith–Kettlewell researchers James Coughlan and Ender Tekin had developed their “bar-code localization and decoding engine,” or BLaDE, an Android smartphone app designed to enable a blind or visually impaired user to find and read product (UPC-A) bar codes. The BLaDE software is still available, although funding for the project has ended and Coughlan and Tekin are no longer actively developing the code. “We have been in contact with several parties over the years who have expressed interest in developing [BLaDE] into a fully featured bar-code reader app/system,” Coughlan says. Swiss Web site CodeCheck.info, working in collaboration with the Swiss Federation of the Blind, used an early version of BLaDE to build an iPhone app to enable vision-impaired people to find and read bar codes. Coughlan says the software has been downloaded 15 times this year but adds that he does not know how extensively people with visual impairments in particular are using BLaDE.

The following video demonstrates how BLaDE works as an aid to those with visual impairments:

Courtesy of the Smith-Kettlewell Eye Research Institute, via YouTube

 


2. Refreshable Braille Displays



Researchers are developing a digital braille display similar in size and thickness to a tablet computer. Courtesy of PhotoDisc/Getty Images

Refreshable braille displays use electromechanically controlled pins, as opposed to the lights in a conventional computer monitor, to convey information. They do this using software to gather a Web page's content from the computer's operating system. The software converts the words and images into a digital version of braille and then represents the text with a touchable row of finger-size rectangular cells lined up side by side like dominoes. Each cell has six or eight small holes through which rounded pins can extend and retract with the help of piezoelectric ceramic actuators to represent various braille characters. Each time a person reads the row of braille with his fingers (left to right), the pin configurations refresh to represent the next line of a Web page's text, and so on.
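The refresh step described above amounts to mapping each character onto a pattern of raised and lowered pins. A toy sketch in Python of that mapping for a six-dot cell, using the standard dot numbering (dots 1–3 run down the left column, 4–6 down the right); the text rendering format is invented for illustration:

```python
# Standard six-dot patterns for a few letters (dot numbers that are raised).
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

def cell_rows(ch):
    """Render one braille cell as three rows of two pins: 'o' raised, '.' lowered."""
    dots = BRAILLE_DOTS[ch]
    left, right = (1, 2, 3), (4, 5, 6)
    return ["".join("o" if d in dots else "." for d in pair)
            for pair in zip(left, right)]

def render_line(text):
    """Join cells side by side, like the single row of a refreshable display."""
    cells = [cell_rows(ch) for ch in text]
    return "\n".join(" ".join(cell[r] for cell in cells) for r in range(3))

print(render_line("cab"))
# oo o. o.
# .. .. o.
# .. .. ..
```

On real hardware the equivalent of `render_line` would drive the piezoelectric actuators for each cell, and the whole row would be recomputed each time the reader finishes a line.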

Concerned that existing, one-row-at-a-time braille displays were causing vision-impaired people to miss out on much of what the Web had to offer, a team of North Carolina State University researchers in 2005 began developing a display that could translate words and images into tactile displays consisting of up to 25 rows, each with 40 cells side by side. By 2012 the researchers had formed a start-up called Polymer Braille and received grants from the National Science Foundation (NSF) and the U.S. Department of Education to develop their new braille display. Plans call for it to be similar in size and thickness to a tablet computer and use an electroactive polymer film that moves in response to applied electricity. This movement raises and lowers pins to create arrays of dots representing braille letters, graphical information or mathematical equations.

Last year the NSF awarded Polymer Braille an additional two-year grant worth nearly $800,000 to help the company build a fully functioning prototype braille display by the middle of 2016. The key to Polymer Braille’s success will be creating a display that costs about $1,000, several thousand dollars less than anything currently available, says Peichun Yang, company president, CEO and former North Carolina State student. Tactisplay Corp. in South Korea, by comparison, recently began selling an $8,500 tabletlike braille device. Tactisplay is also planning to sell an even smaller portable device that can capture and analyze digital images and then render them in braille on a handheld display. The Tactisplay Walk (see video below) is expected to cost about $7,000.

Courtesy of Tactisplay Corp., via YouTube


3. Wearable Finger Reader



Researchers from the Massachusetts Institute of Technology’s Media Lab, Singapore University of Technology and Design and Nanyang Technological University in Singapore are developing the FingerReader. Courtesy of Massachusetts Institute of Technology

Having to use screen readers and similar apps is less than ideal at times—the software can be inaccurate and slow in converting text into speech, and it provides no help when attempting to read text printed on paper. To help address these needs an international team of researchers is developing a finger-worn device that the vision-impaired can use to turn any text into audio even under dim lighting conditions.

Researchers from the Massachusetts Institute of Technology’s Media Lab, Singapore University of Technology and Design, and Nanyang Technological University in Singapore have developed a few different versions of the FingerReader. The first uses two haptic motors—one on top of the finger and another below—that vibrate when a finger deviates from the line of text. A second version uses a musical tone to caution when a finger is not tracking properly. A third variation combines both vibration and sound. The device features a tiny camera and software designed to process video in real time as the finger sweeps from left to right.
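That feedback loop can be thought of as a simple controller: compare the fingertip’s position against the text baseline detected in the camera frame and fire the matching cue when the finger drifts too far. A hypothetical sketch in Python; the tolerance value and cue names are invented for illustration, not taken from the prototypes:

```python
def tracking_feedback(baseline_y, finger_y, tolerance=10):
    """Choose a haptic/audio cue as the finger sweeps along a printed line.

    baseline_y, finger_y: vertical pixel positions in the camera frame
    (smaller y means higher in the image).
    """
    offset = finger_y - baseline_y
    if offset > tolerance:       # finger has drifted below the text line
        return "vibrate_top"     # cue the user to move back up
    if offset < -tolerance:      # finger has drifted above the text line
        return "vibrate_bottom"  # cue the user to move back down
    return "on_track"

print(tracking_feedback(baseline_y=100, finger_y=130))  # vibrate_top
```

A tone-based version would swap the vibration cues for audio ones; the decision logic stays the same.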

The FingerReader exists only as a prototype but the researchers are already considering new configurations and uses. Their experiments thus far have run on a laptop connected to the FingerReader, but they plan to create a more portable version that interfaces with an Android smartphone. The technology may also be able to help those with learning disabilities or dyslexia. A separate team of researchers is pursuing still more uses for finger-mounted devices and was recently awarded a patent for its FingerSight sensor, which gives visually impaired users information about more distant objects.

The following video shows the FingerReader in action:

Courtesy of Massachusetts Institute of Technology, via Vimeo


4. Hover Detection/Onscreen Keyboard Augmentation



HoverZoom works on Samsung Galaxy S4 and S5 Android smartphones, enabling the mobile device displays to detect a finger before it even touches the screen. Courtesy of the University of Bremen

Mobile device onscreen keyboards can even frustrate people without physical disabilities. So researchers are experimenting with “hover detection” capabilities that allow mobile device displays to detect a finger before it even touches the screen.

Apple patented a technology for “hover sensitive devices” in 2011 that could detect hand gestures made near the screen. The patent describes the ability to recognize well-known hand or finger movements such as the “okay” sign, a knocking gesture with a closed fist or a grasping motion to issue commands to a device. Not to be outdone, Samsung has included support for its Air View feature, which lets users enlarge text or activate apps without touching the screen, on certain Galaxy devices running Google Android. Google, meanwhile, has been pursuing gesture-based controls via its Project Soli as a way to interact with tiny touch screens such as those found on smartwatches.

Programming mobile devices so they can accurately identify gesturing at a distance from the screen is technologically challenging, according to Frederic Pollman, a researcher at the University of Bremen’s Digital Media Group in Germany. Pollman has been working on the problem for years and developed a mobile app called HoverZoom for Samsung Galaxy S4 and S5 Android devices, both of which include Air View and have the processing power to avoid any delay between a user’s finger movement and the device’s ability to respond to that movement. Pollman says there is not much ongoing development on the project at this time as he writes his dissertation on the research done so far. As newer, more powerful devices integrate Air View or some other type of hover-detection capability, there may be more opportunities for people to try HoverZoom, he adds.
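Conceptually, a HoverZoom-style keyboard maps the hover position reported by the screen to the nearest key and enlarges that key before the touch lands. A toy model in Python; the layout coordinates, key size and zoom factor are all invented for illustration:

```python
# A toy one-row keyboard: key -> (x_center, y_center) in pixels.
KEYS = {"q": (20, 40), "w": (60, 40), "e": (100, 40), "r": (140, 40)}

def key_under_hover(x, y):
    """Return the key whose center is closest to the hovering fingertip."""
    return min(KEYS, key=lambda k: (KEYS[k][0] - x) ** 2 + (KEYS[k][1] - y) ** 2)

def magnify(key, factor=1.5, base=36):
    """Report the enlarged on-screen size (in pixels) for the predicted key."""
    return key, int(base * factor)

# A fingertip hovering near the 'e' key triggers that key's enlargement.
print(magnify(key_under_hover(95, 38)))  # ('e', 54)
```

The hard part in practice, as Pollman notes, is not this lookup but doing it with no perceptible lag as the hover coordinates stream in.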

Courtesy of the University of Bremen

5. Mobile Device American Sign Language (MobileASL)


Since developing MobileASL (American Sign Language) in 2008, researchers have studied its ability to achieve higher quality video at lower bandwidths—a combination crucial to ASL use via mobile devices. Courtesy of Huntstock/Thinkstock

Mobile video telephony apps such as FaceTime and Skype have become incredibly popular in recent years as network data speeds have increased, enabling smoother video streaming with fewer delays. This is good news for people with hearing impairments who communicate using sign language via mobile video. There is a downside, however, as streaming video generates large amounts of data traffic that can slow mobile networks. The technology can also become expensive for users whose data plans are limited.

A team of University of Washington and Cornell University researchers worked for several years on a way to compress video so that it can take up less bandwidth and better enable the use of sign-language communication over mobile networks. The idea behind the MobileASL (for American Sign Language) compression project was to relieve traffic burdens from high-speed networks by allowing video to be sent over slower networks so that it could be displayed on less powerful mobile phones and consume less battery power.
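The bandwidth argument follows from simple arithmetic: a stream’s bit rate is bits per frame times frames per second, so trimming both multiplies the savings. A back-of-the-envelope sketch in Python (the figures are illustrative, not MobileASL’s actual rates):

```python
def stream_kbps(bits_per_frame, fps):
    """Rough bit rate of a compressed video stream in kilobits per second."""
    return bits_per_frame * fps / 1000

# Illustrative numbers: a generic low-end mobile video stream...
typical = stream_kbps(bits_per_frame=20_000, fps=30)  # 600.0 kbps
# ...versus a sign-language-tuned stream at a reduced frame rate with
# heavier compression that concentrates bits on the face and hands.
reduced = stream_kbps(bits_per_frame=5_000, fps=10)   # 50.0 kbps

print(typical, reduced, typical / reduced)  # 600.0 50.0 12.0
```

The research challenge is the second factor: compressing each frame that hard while keeping handshapes and facial grammar legible enough for fluid signing.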

Since developing MobileASL in 2008 researchers have been studying its ability to achieve higher-quality video at lower bandwidths—a combination crucial to using ASL over mobile devices. The researchers concluded in a study published late last year that MobileASL’s compression algorithms enabled people who are deaf to converse fluidly using ASL via mobile video at bit and frame rates significantly below those of typical mobile video traffic, says Eve Riskin, a professor of electrical engineering and associate dean of diversity and access at the University of Washington’s College of Engineering. The researchers are no longer actively developing MobileASL, in part because real-time Web video has improved considerably since the project began. Still, Riskin says the technology could further cut the volume of data traffic if a cell phone provider chose to implement MobileASL in its real-time video codec.

The following video includes demonstrations of people communicating via ASL over mobile devices, aided by MobileASL technology:

Courtesy of the University of Washington, via YouTube
