

Hacking for a Cause


Picture of a smart phone and device

“Alexa, wake me up at 7 a.m. tomorrow,” you say out loud while driving. As you arrive home, your car connects to the home wireless network, the garage door opens, and your favorite radio station starts playing in the kitchen. The next morning, the coffeemaker starts brewing at 6:55 and the lights gradually turn up to an optimized setting to start another day.

This hyperconnected, life-simplifying scenario is already a reality for people who own smart devices connected to voice-enabled personal assistants like Amazon’s Alexa, Apple’s Siri, and Google Assistant. What’s also a reality is the potential for these systems to be used for hacking and data theft.

Assistant Professor Jian Liu of the Min H. Kao Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville, is working to improve the security of personal assistant systems by—well, hacking them.

“Our research can help make companies more aware of what the drawbacks are in their software,” Liu said. “In turn, that can help guide them as they make the next round of devices, making them more secure.”

Related: Watch and hear the research behind Liu’s device hacking

Jian Liu

The growing field of electronic personal assistants increasingly relies on voice recognition for security. While it may seem that the system recognizes your voice in the same way that another person would, the machines rely on deep neural networks (DNNs) for speech and voice recognition.

While DNN systems are promising, a key weakness is their inherent vulnerability to adversarial machine learning attacks. Much as the human brain uses neural networks to act, react, and perform tasks, DNNs use algorithms and computations to help a device make decisions without relying on direct programming from a person, learning as it goes and gaining the ability to operate with autonomy. Such systems could allow a greatly expanded role for autonomous devices, ranging from everyday helpers like personal assistants to machines able to perform tasks in environments inhospitable to humans.

Liu and his team have found that signals hidden within otherwise normal-sounding content—like music, birds singing, or phone notification sounds—might unlock and control devices, meaning your data could be accessed without you ever knowing someone was making the attempt.

“It might be something as seemingly innocent as someone playing music on their phone as they walk by, or even unnoticeable environmental sounds like birds singing, car horns, or HVAC noise, but in reality there is an underlying signal playing that is attempting to hack into your own electronics,” Liu said. “The range of what someone might do once they’ve gained access could be anything from having your connected home devices open your garage door to shopping with your information. The scope is so big that it can become dangerous in a hurry.”

To help make sure that doesn’t happen, Liu and his team are embedding attack signals within otherwise ordinary sounds and broadcasting them to observe the effect on each device. Known as adversarial perturbations, these hidden signals help the team figure out how each device can be exploited, which in turn reveals the range of vulnerabilities that need to be addressed and protected against.
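
At its core, this kind of attack adds a small, carefully optimized perturbation to benign audio so that a recognition model hears a command a human listener does not notice. The sketch below is a minimal illustration of that general idea, not the team’s AdvPulse implementation: the tiny stand-in classifier, the placeholder waveform, the chosen target class, and the amplitude budget (epsilon) are all assumptions made for demonstration, using PyTorch.

# Minimal sketch of a targeted audio adversarial perturbation (illustrative only).
# A small perturbation "delta" is optimized so a toy classifier predicts an
# attacker-chosen class, while delta's amplitude stays below an audibility budget.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a speech-command DNN; a real attack would target an actual model.
model = nn.Sequential(nn.Linear(16000, 64), nn.ReLU(), nn.Linear(64, 10))
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

benign_audio = torch.randn(1, 16000) * 0.1   # placeholder 1-second waveform
target_label = torch.tensor([3])             # attacker-chosen command class
epsilon = 0.01                               # maximum perturbation amplitude

delta = torch.zeros_like(benign_audio, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    logits = model(benign_audio + delta)
    loss = loss_fn(logits, target_label)     # push prediction toward the target
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)      # keep the perturbation quiet

print("Predicted class:", model(benign_audio + delta).argmax(dim=1).item())

Bounding the perturbation’s amplitude is what lets such a signal hide inside music or ambient noise while still steering the model’s output, which is why these attacks can go unnoticed by people nearby.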

Their paper “AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations” was recently accepted for presentation at the Association for Computing Machinery’s annual flagship Conference on Computer and Communications Security (ACM CCS 2020). Co-authors include two UT doctoral students, Zhuohang Li and Yi Wu, and Professor Yingying Chen and Assistant Professor Bo Yuan from Rutgers University.

The team’s goal is to develop and recommend measures that can be used by companies like Apple and Google to limit or completely disable such attacks, so when you wake up each morning you can enjoy your freshly brewed coffee with peace of mind.


Contact

David Goddard (865-974-0683, david.goddard@utk.edu)