
As voice assistants go mainstream, researchers warn of vulnerabilities

New research suggests that popular voice control platforms may be vulnerable to silent audio attacks that can be hidden within music or YouTube videos -- and Apple, Google and Amazon aren't saying much in response.

Ry Crist Senior Editor / Reviews - Labs

In case you haven't noticed, voice-activated gadgets are booming in popularity right now, and finding their way into a growing number of homes as a result. That's led security researchers to worry about the potential vulnerabilities of voice-activated everything. 

Now, per the New York Times, some are claiming that those vulnerabilities include recorded commands, pitched at frequencies beyond what humans can hear, that can be hidden inside otherwise innocuous-seeming audio. In the wrong hands, they say, such secret commands could be used to send messages, make purchases, wire money -- really anything these virtual assistants can already do -- all without you realizing it.

All of that takes things a step beyond what we saw last year, when researchers in China showed that inaudible, ultrasonic transmissions could successfully trigger popular voice assistants like Siri, Alexa, Cortana and the Google Assistant. That method, dubbed "DolphinAttack," required the attacker to be within whisper distance of your phone or smart speaker. New studies conducted since suggest that ultrasonic attacks like that one could be amplified and executed at a distance -- perhaps as far away as 25 feet.
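For a rough sense of how that kind of attack shapes its audio, the sketch below amplitude-modulates a voice command onto a 25 kHz carrier, above the limit of human hearing. The sample rate, carrier frequency and noise stand-in are illustrative assumptions only; the hardware quirks and playback details that make DolphinAttack actually work aren't shown here.

```python
# Illustrative sketch of the amplitude-modulation idea behind ultrasonic
# ("DolphinAttack"-style) audio: a voice command is shifted onto a carrier
# above the range of human hearing. Frequencies and values here are
# hypothetical; real attacks depend on speaker and microphone hardware.
import numpy as np

SAMPLE_RATE = 96_000        # high sample rate needed to represent ultrasound
CARRIER_HZ = 25_000         # carrier above the ~20 kHz limit of human hearing

def modulate_ultrasonic(command: np.ndarray) -> np.ndarray:
    """Amplitude-modulate a baseband voice command onto an ultrasonic carrier."""
    t = np.arange(len(command)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    # Standard AM: the offset keeps the envelope non-negative so a nonlinear
    # microphone front end can recover ("demodulate") the original command.
    envelope = 0.5 + 0.5 * command / (np.max(np.abs(command)) + 1e-9)
    return envelope * carrier

# Toy stand-in for a recorded command (1 second of noise).
command = np.random.randn(SAMPLE_RATE)
ultrasonic = modulate_ultrasonic(command)
print(ultrasonic.shape, ultrasonic.dtype)
```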

The most recent study cited by the Times comes from UC Berkeley researchers Nicholas Carlini and David Wagner. In it, Carlini and Wagner claim that they were able to fool Mozilla's open-source DeepSpeech voice-to-text engine by hiding a secret, inaudible command within audio of a completely different phrase. The pair also claims that the attack worked when they hid the rogue command within brief music snippets.  
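At its core, that technique is an optimization problem: nudge the audio by as little as possible until the model transcribes a chosen phrase instead of the real one. The sketch below runs that loop against a toy stand-in model rather than DeepSpeech itself; the model, target phrase and penalty weight are placeholders for illustration, not the researchers' actual code.

```python
# Minimal sketch of the gradient-based idea behind hidden-command attacks:
# search for a tiny perturbation that makes a speech-to-text model "hear"
# a different phrase. ToySpeechModel is a hypothetical stand-in; the real
# attack targets Mozilla's DeepSpeech with a CTC loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToySpeechModel(nn.Module):
    """Placeholder: maps raw audio frames to per-frame character logits."""
    def __init__(self, frame_size=160, num_chars=29):
        super().__init__()
        self.frame_size = frame_size
        self.proj = nn.Linear(frame_size, num_chars)

    def forward(self, audio):                         # audio: (samples,)
        frames = audio.unfold(0, self.frame_size, self.frame_size)
        return self.proj(frames).log_softmax(dim=-1)  # (frames, chars)

model = ToySpeechModel()
audio = torch.randn(16_000)                           # 1 second of benign audio
target = torch.randint(1, 29, (10,))                  # chosen transcription ids
delta = torch.zeros_like(audio, requires_grad=True)   # the hidden perturbation
opt = torch.optim.Adam([delta], lr=1e-2)
ctc = nn.CTCLoss(blank=0)

for step in range(100):
    logits = model(audio + delta)                     # (T, C)
    log_probs = logits.unsqueeze(1)                   # (T, batch=1, C) for CTC
    loss = ctc(log_probs,
               target.unsqueeze(0),
               torch.tensor([logits.shape[0]]),
               torch.tensor([len(target)]))
    loss = loss + 0.1 * delta.abs().max()             # keep the change quiet
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}, max |delta|: {delta.abs().max():.4f}")
```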

"My assumption is that the malicious people already employ people to do what I do," Carlini told the Times, with the paper adding that, "he was confident that in time he and his colleagues could mount successful adversarial attacks against any smart device system on the market."

"We want to demonstrate that it's possible," Carlini added, "and then hope that other people will say, 'Okay this is possible, now let's try and fix it.'"

So what are the makers of these voice platforms doing to protect people? Good question. None of the companies we've talked to have denied that attacks like these are possible -- and none of them have offered up any specific solutions that would seem capable of stopping them from working. None would say, for instance, whether their voice platforms are capable of distinguishing between different audio frequencies and then blocking ultrasonic commands above 20 kHz. Some, like Apple, declined to comment for this story.
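For a rough illustration of what such a frequency check might look like, here's a sketch that flags clips carrying an outsized share of their energy above 20 kHz. The cutoff and threshold are assumptions made for the example, not anything these companies have confirmed they use.

```python
# Hypothetical screening check: measure how much of a clip's energy sits
# above the ~20 kHz limit of human hearing and flag it if that share is
# suspiciously high. Cutoff and threshold values are illustrative only.
import numpy as np

def looks_ultrasonic(audio: np.ndarray, sample_rate: int,
                     cutoff_hz: float = 20_000.0,
                     max_share: float = 0.10) -> bool:
    """Return True if a suspicious share of the signal's energy is ultrasonic."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-12
    ultrasonic_share = spectrum[freqs > cutoff_hz].sum() / total
    return ultrasonic_share > max_share

# Example: a 25 kHz tone sampled at 96 kHz should trip the check.
sr = 96_000
t = np.arange(sr) / sr
print(looks_ultrasonic(np.sin(2 * np.pi * 25_000 * t), sr))  # True
print(looks_ultrasonic(np.sin(2 * np.pi * 300 * t), sr))     # False
```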

Most companies are reluctant to speak publicly about these sorts of security issues. Despite a heavy focus on artificial intelligence and voice at its yearly I/O developer conference this week, Google's keynote presentation hardly mentioned security at all.

"The Google Assistant has several features which will mitigate aggressive actions such as the use of undetectable audio commands," a company spokesperson tells CNET. "For example, users can enable Voice Match, which is designed to prevent the Assistant from responding to requests relating to actions such as shopping, accessing personal information, and similarly sensitive actions unless the device recognizes the user's voice."

Mitigating a potential vulnerability is a good start, but Voice Match -- which isn't foolproof, by the way -- only protects against a limited number of scenarios, as Google itself notes. For instance, it wouldn't stop an attacker from messing with your smart home gadgets or sending a message to someone. And what about users who don't have Voice Match enabled in the first place?

Amazon, meanwhile, offers similar protections for Alexa using its own voice recognition software (which, again, can be fooled), and the company also lets users block their Alexa device from making voice purchases unless a spoken PIN code is given. The same goes for unlocking smart locks, where the spoken PIN code isn't just an option, but a default requirement.

All of that said, there's no indication from Amazon that there's anything within Alexa's software capable of preventing attacks like these outright.

"We limit the information we disclose about specific security measures we take," a spokesperson tells CNET, "but what I can tell you is Amazon takes customer security seriously and we have full teams dedicated to ensuring the safety and security of our products." 

The spokesperson goes on to describe Amazon's efforts at keeping the line of voice-activated Echo smart speakers secure, which they say includes "disallowing third party application installation on the device, rigorous security reviews, secure software development requirements and encryption of communication between Echo, the Alexa App and Amazon servers."

That's all well and good, but as attacks like these creep closer to real-world plausibility, manufacturers will likely need to do more to assuage consumer fears. In this day and age, "trust us" might not cut it.