ArmorVox advanced security tracks fraud related to synthetic speech, digital voice and playback attacks


At Auraya, we design solutions that help businesses manage risk. Increasingly, these risks include voice imitation, synthetic speech and playback attacks.

Start-ups such as Lyrebird have been in the news for technology that creates a digital voice from existing audio files. The technology is impressive and has numerous applications. However, it also poses a potential threat to voice biometrics deployments.

ArmorVox advanced security delivers a robust and reliable technological solution that addresses this risk and helps manage fraud.

 
Replay Attack Detection

A major difference between natural voice and pre-recorded voice is that natural voice varies slightly, even when an individual repeats the exact phrase or digit string several times. A pre-recorded voice sounds exactly the same each time.

An individual's tone, pitch, speed, memory and attention while speaking make natural voice vary each time. Such changes are expected and are actively learned by ArmorVox's underlying machine learning algorithms.

On the other hand, if the voice sounds ‘too perfect’, the technology flags it as potential fraud and raises the security setting. A raised security setting may trigger a random digit string on the speaker’s device that the enrolled speaker must say immediately, reducing a fraudster’s ability to use pre-recorded or synthetically created responses.
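As an illustrative sketch only (ArmorVox's actual algorithms are proprietary), the ‘too perfect’ check can be thought of as comparing a new utterance's features against those captured in earlier sessions: a match closer than natural variation allows suggests a replayed recording. The function names, toy feature vectors and tolerance value below are all hypothetical.

```python
import math

# Hypothetical tolerance: genuine repeats of a phrase differ far more than this.
REPLAY_TOLERANCE = 1e-6

def feature_distance(a, b):
    """Euclidean distance between two (toy) voice feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_possible_replay(new_features, previous_features, tol=REPLAY_TOLERANCE):
    """Flag the attempt if it is near-identical to any earlier utterance."""
    return any(feature_distance(new_features, old) < tol
               for old in previous_features)

# A genuine repeat varies slightly; an exact copy of earlier audio does not.
history = [[0.31, 0.70, 0.52], [0.30, 0.72, 0.50]]
print(is_possible_replay([0.31, 0.70, 0.52], history))  # exact copy -> True
print(is_possible_replay([0.29, 0.71, 0.53], history))  # natural variation -> False
```

In a real system the comparison would run on acoustic features rather than raw numbers, but the principle is the same: too little variation is itself a fraud signal.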

 

Random Challenge Response

The Random Challenge Response is a unique digit sequence that appears on the speaker’s device for enhanced security. It provides liveness testing: to be successfully verified, the individual must not only match on voice but also speak the correct digits displayed on the device. This additional security setting can be raised when the technology detects a possible fraudulent attempt such as a playback attack.
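A minimal sketch of this flow, assuming a simple two-factor check (the exact ArmorVox protocol is not public): a fresh digit string is generated for each attempt, so a pre-recorded or synthesised response cannot anticipate it. Both function names here are illustrative.

```python
import secrets

def generate_challenge(length=6):
    """Return a fresh random digit string, e.g. '402913', per attempt."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

def verify_response(challenge, spoken_digits, voice_match):
    """Pass only if the correct digits were spoken AND the voice matched."""
    return spoken_digits == challenge and voice_match

challenge = generate_challenge()
print(verify_response(challenge, challenge, voice_match=True))   # True
print(verify_response(challenge, challenge, voice_match=False))  # False
```

Using a cryptographic source such as `secrets` (rather than a seeded pseudo-random generator) matters here: the challenge must be unpredictable to an attacker holding recordings of past sessions.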

 

Sound Detection

To ensure sounds such as tones, musical instruments, noise and synthetic voice are not used to enrol a voice print, a sound detection model screens every voice enrolment and only allows human-generated voice to be enrolled.

The sound detection model is built on the underlying characteristics of natural voice, which are distinctly different from those of artificial voice. It is sensitive to other artificially created sounds and alerts organisations to fraud in real time.
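As a sketch of how such a gate might sit in an enrolment pipeline (the classifier here is a hypothetical stand-in for ArmorVox's sound detection model, and all names are assumptions): only audio confidently classified as live human speech is allowed through, and anything else can be rejected and raised as a real-time alert.

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    label: str     # e.g. "human", "synthetic", "tone", "music", "noise"
    score: float   # classifier confidence in the label, 0..1

def allow_enrolment(result: DetectionResult, min_confidence: float = 0.9) -> bool:
    """Enrol only audio confidently classified as human speech;
    everything else is rejected before a voice print is created."""
    return result.label == "human" and result.score >= min_confidence

print(allow_enrolment(DetectionResult("human", 0.97)))      # True
print(allow_enrolment(DetectionResult("synthetic", 0.99)))  # False
print(allow_enrolment(DetectionResult("human", 0.60)))      # too uncertain -> False
```

The design choice worth noting is that the gate is conservative: a confident "synthetic" verdict and an unconfident "human" verdict are both rejected, since a false enrolment poisons every later verification.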

 

Voice Modelling

With new and emerging security threats, the team at Auraya can design voice models that capture the distinct sound characteristics of a threat, and integrate the model into ArmorVox to differentiate between true speakers and fraudulent attempts. With our security design, we ensure that businesses stay a step ahead in managing risk and achieve sustainable business outcomes by implementing voice biometrics.

The ArmorVox Advantage


Auraya is a world leader in biometric voice verification technology and empowers people and organizations to interact and engage with security and convenience. As a specialist voice biometric technology developer, we have a track record of delivering unparalleled security performance that is simple to deploy, integrate and maintain whilst delivering the most delightful customer experience.

At Auraya, we believe that the way you implement voice biometrics can make all the difference in realizing business value. We have spent decades gaining unique, real-world insights into key business drivers and customer expectations for voice biometrics and innovate for evolving market opportunities. Our proprietary voice biometrics technology ArmorVox helps support real-time fraud reduction with the Advanced Security Suite.

