Voice Assistant Enabled Devices Hacked with Light Commands


Each year, consumers use voice-assistant-enabled devices such as Amazon Alexa and Google Home more frequently. The ease of issuing spoken commands, such as locking and unlocking doors or searching the internet, has provided consumers with great convenience. Yet while voice assistant technology continues to improve, one component is still missing: these devices lack robust security.

Recently, a group of academic researchers from the University of Michigan and the University of Electro-Communications hacked several voice-assistant-enabled devices using laser-based audio injection attacks. They stated, “Light Commands is a vulnerability of MEMS microphones that allows attackers to remotely inject inaudible and invisible commands into voice assistants.” They tested Light Commands on various voice recognition systems, including Amazon Alexa, Google Assistant and Siri, to issue commands, control smart home devices, start certain vehicles, brute-force smart locks and even make online purchases.

In terms of using speaker recognition to combat Light Commands, the researchers noted that speaker recognition is turned off by default on smart speakers and turned on by default on devices such as phones and tablets. However, the only verification these devices actually perform is matching the ‘Ok Google’ or ‘Alexa’ wake-up words. This discovery further highlights the need for stronger security, such as a more robust voice biometric system.

With Auraya’s voice biometric technologies, organizations like Amazon and Google can implement secure and convenient voice authentication capabilities on their smart devices, allowing users to operate voice-assistant-enabled devices more securely. For example, through Auraya’s ArmorVox engine, passive verification can run while a user is issuing a command, confirming that the command is being spoken by a verified user. Once the command is identified, active verification can authenticate the transaction or command by having the user say a text-independent or digit-independent phrase, such as a secret passphrase or the user’s customer account number. This method provides step-up security beyond the generic wake-up words and is especially important for higher-risk commands such as monetary transactions or unlocking doors. Additionally, a voiceprint is built from high-level voice characteristics obtained from voice samples, so the online text-to-speech synthesis tools used in the research will not work: Auraya’s voice biometric features detect synthetic and playback voice recordings, which can be flagged and monitored.
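
To illustrate the flow described above, here is a minimal sketch of passive verification on the command audio followed by step-up active verification for high-risk commands. The function names, thresholds and stubbed return values are hypothetical placeholders for illustration only; they are not the actual ArmorVox API.

```python
# Hypothetical step-up voice authentication flow (illustrative only).
HIGH_RISK_COMMANDS = {"unlock_door", "make_purchase", "start_vehicle"}

def detect_synthetic_voice(audio: bytes) -> bool:
    """Placeholder liveness check that would flag text-to-speech or replayed audio."""
    return False  # stubbed result for illustration

def passive_verify(audio: bytes, voiceprint: bytes) -> float:
    """Placeholder scoring of the command audio against the enrolled voiceprint."""
    return 0.92  # stubbed similarity score between 0.0 and 1.0

def active_verify(prompted_audio: bytes, voiceprint: bytes) -> float:
    """Placeholder verification of a prompted phrase, e.g. a secret passphrase."""
    return 0.95  # stubbed score

def prompt_user_for_phrase() -> bytes:
    """Placeholder for capturing the user's spoken step-up phrase."""
    return b"prompted-audio"

def handle_command(command: str, audio: bytes, voiceprint: bytes) -> str:
    # Reject synthetic or replayed audio outright (e.g. laser-injected TTS).
    if detect_synthetic_voice(audio):
        return "rejected: synthetic or replayed voice detected"

    # Passive verification runs on the command audio itself.
    if passive_verify(audio, voiceprint) < 0.80:
        return "rejected: speaker does not match the enrolled voiceprint"

    # Step-up: high-risk commands require an additional prompted phrase.
    if command in HIGH_RISK_COMMANDS:
        if active_verify(prompt_user_for_phrase(), voiceprint) < 0.90:
            return "rejected: step-up verification failed"

    return f"accepted: executing '{command}'"

print(handle_command("unlock_door", b"command-audio", b"enrolled-voiceprint"))
```

In this sketch, the passive check gates every command, while the extra prompted phrase is requested only for commands on the high-risk list, mirroring the step-up approach described above.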

 
