Smart speakers already face privacy concerns, but now security researchers have found that malicious apps designed to eavesdrop can sneak through Google's and Amazon's vetting processes. On Sunday, Security Research Labs disclosed its findings after developing eight voice apps that could listen in on people's conversations through Amazon's Echo and Google's Nest devices.
All of the apps passed through the companies' reviews for third-party apps. The research was first reported by CNET sister site ZDNet.
Both Amazon and Google said they responded to the discovery.
"Customer trust is important to us, and we conduct security reviews as part of the skill certification process. We quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified," Amazon said in a statement.
"All Actions on Google are required to follow our developer policies," Google said in a statement, "and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future."
Voice-controlled digital-assistant software such as Amazon's Alexa, Google's Assistant and Apple's Siri presents a privacy headache, since the devices that use the apps are essentially internet-connected microphones, delivering your conversations to servers at Amazon, Google or Apple. All three companies have been criticized this year for letting human reviewers listen to recordings from the voice assistants as part of efforts to improve such software's accuracy.
They've taken steps to address those privacy issues. Apple and Google now require people to opt in to their accuracy-review programs. Amazon also adjusted its privacy settings for Alexa after the backlash.
But security researchers found there's still a lot of room for improvement.
The eavesdropping apps created by the researchers worked by taking advantage of silence. The researchers developed horoscope apps that, when prompted, would respond with an error message. But instead of ending the recording session, as an Alexa or Google Assistant skill usually does, the apps kept listening in the background.
That's because the developers simulated silence by inserting the Unicode sequence U+D801 followed by a dot and a space. That character can't be pronounced, but both Alexa's and Google Home's text-to-speech engines attempt to process it anyway, leaving a gap during which the device continues listening even after a person thinks it's finished with the task.
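The researchers' actual skill code isn't public, but the core of the trick, as described, can be sketched in a few lines. The function name and repeat count below are hypothetical; the only detail taken from the report is the unpronounceable sequence itself, a lone surrogate U+D801 followed by a dot and a space, repeated to stretch out the fake "silence":

```python
# Hypothetical sketch of the "silence" padding described in the research.
# U+D801 is a lone surrogate: no speech engine can voice it, so a TTS
# system that tries to process it produces a long, silent pause while
# the microphone session stays open.
UNPRONOUNCEABLE = "\ud801. "  # U+D801, dot, space

def silent_padding(repeats: int) -> str:
    """Build a speech string the text-to-speech engine cannot voice."""
    return UNPRONOUNCEABLE * repeats

# A malicious skill would return something like this as its "speech"
# output after the fake error message, keeping the device listening.
payload = silent_padding(10)
```

Note that a lone surrogate is not valid UTF-8 on its own, which is part of why it confuses text-to-speech pipelines rather than simply being skipped.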
That recorded conversation wasn't just sent to Amazon's and Google's servers; it was also sent to the third-party developers.
The security researchers also demonstrated that they could use these malicious apps to trick people into giving up their passwords. After an extended period of silence, the skills could make the voice assistants say, "An important security update is available for your device. Please say 'start update' followed by your password."
Amazon said it now prevents skills from asking people for their passwords and added that it would never ask people to share their credentials through the voice assistant.
Hacks like these have happened before with Amazon's Alexa. In April 2018, security researchers found a flaw that could keep a skill listening indefinitely, essentially letting any third-party app eavesdrop on people. That vulnerability was tucked away in a calculator app.
The researchers said that they disclosed the newly public vulnerabilities to Amazon and Google earlier this year and that the apps have since been removed.
Originally published Oct. 21 at 5:58 a.m. PT.
Update, 7:24 a.m. PT: Adds statement from Amazon.
Update, 8:30 a.m. PT: Adds statement from Google.