Voice assistants allow smartphone users to snap a photograph or send a text with a spoken command. Yet they also potentially let hackers do the same things by bombarding the device’s microphone with ultrasonic waves (sounds with frequencies higher than humans can hear). Researchers have previously demonstrated how they could trick a phone by sending these waves through the air, but the approach required proximity to the victim and was easily disrupted by nearby objects. Now a new technique called SurfingAttack can send ultrasonic waves through solid objects. It could enable potential snoops to avoid obstacles and perform more invasive tasks—including stealing text messages and making calls from a stranger’s phone.
To test this method, researchers hid a remotely controllable attack device on the underside of a metal tabletop, where it could send ultrasonic waves through the table to trigger a phone lying flat on its surface. “We are using solid materials to transmit these ultrasonic waves,” says Qiben Yan, a computer scientist at Michigan State University. “We can activate your voice assistant placed on the tabletop, read your private messages, extract authentication pass codes from your phone or even call your friends.” The experiment, described in a paper presented at the 2020 Network and Distributed System Security Symposium (NDSS) in February, worked on 17 popular smartphone models, including ones manufactured by Apple, Google, Samsung, Motorola, Xiaomi and Huawei.
Voice assistants typically pick up audible commands through the microphone on a smart speaker or cellular device. A few years ago researchers discovered that they could modulate voice commands into the ultrasonic frequency range. Though inaudible to humans, these signals could still register with a device’s speech-recognition system. One ultrasonic hack, presented at a computer security conference in 2017, used these “silent” commands to make Apple’s assistant Siri start a FaceTime call and to tell Google Now to activate a phone’s airplane mode. That kind of intrusion relied on a speaker placed no more than five feet from the victim’s device, but a later ultrasonic technique, presented at a networking conference in 2018, increased the distance to about 25 feet. Still, all of these techniques sent their signals through the air, which has two drawbacks: it requires visibly conspicuous speakers or speaker arrays, and any object that comes between the signal source and the target device can disrupt the attack.
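The core trick in these “silent” commands is ordinary amplitude modulation: the voice signal is shifted onto an ultrasonic carrier, and the microphone’s own nonlinearity recovers the audible envelope. The sketch below illustrates that modulation step with NumPy; the 25 kHz carrier frequency and the single-tone stand-in for a spoken command are illustrative assumptions, not the exact parameters from any of the published attacks.

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent a ~25 kHz carrier
CARRIER_HZ = 25_000  # assumed ultrasonic carrier; real attacks tune this per device

def modulate_ultrasonic(voice, fs=FS, carrier_hz=CARRIER_HZ, depth=1.0):
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    A microphone's nonlinearity demodulates the envelope back into the
    audible band -- the effect the 2017 "silent command" work exploited.
    This is an illustrative sketch, not any paper's exact pipeline.
    """
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: the voice spectrum is shifted up around the carrier frequency.
    return (1 + depth * voice) * carrier

# Toy "voice": a 400 Hz tone standing in for a spoken command (1 second).
t = np.arange(FS) / FS
voice = 0.5 * np.sin(2 * np.pi * 400 * t)
signal = modulate_ultrasonic(voice)

# The modulated signal's energy sits near the carrier, above human hearing (~20 kHz).
spectrum = np.abs(np.fft.rfft(signal))
peak_hz = np.argmax(spectrum) * FS / len(signal)
print(round(peak_hz))  # → 25000
```

Because every spectral component of the transmitted signal lies above roughly 20 kHz, a bystander hears nothing, yet the demodulated envelope inside the microphone is an ordinary voice command.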
Sending ultrasonic vibrations through solid objects allows SurfingAttack to avoid these issues. “The environment is affecting our attack a lot less effectively, in our scenario, than in previous work that’s over the air,” says Ning Zhang, a computer scientist at Washington University in St. Louis. With airborne ultrasonic waves, “if somebody walks by, say in the airport or coffee shop, that signal would be blocked—versus, for our attack, it doesn’t matter how many things are placed on the table.” In addition, the researchers note, their method is less visible and consumes less power than an air-based speaker because its ultrasonic waves emanate from a small device that sticks to the bottom of a table. Yan estimates it could cost less than $100 to build. Another feature of SurfingAttack is that it can both send and receive ultrasonic signals. This arrangement lets it extract information—such as text messages—in addition to ordering the phone to perform tasks.
“I think it’s a really intriguing paper, because now [such hacking] doesn’t need in-air propagation of the signals,” says Nirupam Roy, an assistant professor of computer science at the University of Maryland, College Park, who did not contribute to the new study. He also praises the measures the researchers took to ensure that as the ultrasonic signal moved through the tabletop, the material did not produce any noises that might alert the phone’s owner. “Any vibrating surface, even the signal that is flowing through the solid, can leak out some audible signal in the air. So they have shown some techniques to minimize that audible leakage and to keep it really inaudible to the [phone’s] user.”
To avoid falling prey to bad actors, the researchers suggest phone owners could limit the access they give their AI assistants. What an attacker can do “really depends on how much the user is depending on the voice assistant to perform day-to-day activities,” Zhang says. “So if you give your Siri access to your artificial pancreas to inject insulin, then [you’re in] big trouble, because we can ask it to inject a ridiculous amount of insulin. But if you’re a more cautious person and say, ‘Hey, I only want Siri to be able to ask questions from the Internet and tell me jokes,’ then it’s not a big deal.”
Another way to prevent SurfingAttack, specifically, would be to swaddle one’s device in a squishy foam case or to place it only on cloth-covered surfaces. In the researchers’ tests, these materials muffled the ultrasonic signal effectively, whereas common rubber phone cases did not stop the attack. A more effective fix, however, might be to simply avoid setting one’s phone down in public spaces. “A lot of people are just placing their phones on the table without taking care,” Yan says. “I just came from a Chicago airport. I saw a lot of people putting their phones [down to charge] on a metal table, unattended.”
But Zhang is less concerned about ultrasonic devices being planted at random public tables, because this approach would require a lot more work than, say, sending a phishing e-mail. “From my previous experience in the industry, an attack generally takes a lot of effort,” he says. “And if it’s not worth it, nobody would do it.” Hackers would not bother to develop and program SurfingAttack devices unless they were highly motivated to extract information from a specific individual, Zhang suggests, “so I don’t think we’ll see a lot of people attaching ultrasound speakers below [a coffee shop] table.”
Whether or not SurfingAttack makes its way into the real world, its existence could serve as a warning to developers of voice assistants. As Roy explains, experiments like this serve to “reveal a new kind of threat.” Such studies usually take place in labs, where the environment is more controlled than it would be in real life. But complete realism is not the goal. Instead, research like this aims to expose vulnerabilities in principle, so that developers can fix them before hackers find them. “Researchers who are working to reveal these kinds of attacks earlier, before the attacker, they’re doing a fantastic job to identify these loopholes in our system,” he says.