Combating the rise of AI using Voice as a signal

AI technology is not new. However, as has been widely publicized, its sophistication and applicability are now growing exponentially, making it more accessible and expanding the ways it can be used.

Its positive applications are extensive - from disease prevention to city planning, AI will transform our world and our societies.

However, for all the good uses there are just as many bad ones. Fraudsters are now armed with increasingly sophisticated tools to identify and exploit vulnerabilities. For those with a stake in security and identity, particular attention needs to be paid to generative AI (GAI) and its subset, the generative adversarial network (GAN): evolutions of machine learning that enable ever more sophisticated spoofing of anything from images to voices.

Anyone who has been on social media recently, or has seen the latest blockbuster movies such as Indiana Jones, has seen it in action: machines creating images, videos, and voices that cannot easily be distinguished from the human subject. A prime and unsettling example is the alarming deepfake featuring Morgan Freeman. This reality underlines how our current comprehension of these advancements is only scratching the surface. What we know today is child's play compared to the truly pervasive yet concealed capabilities for manipulating identity and constructing entire digital personas.

The digital economy needs to take this very seriously. Much of the ecosystem rests on identity verification and fraud management solutions that onboard customers through a combination of data checks, document verification, biometric matching of ID document images to faces (face match), and liveness checks (snapshots from the device's camera to prove presence). This combination of services was previously considered secure, but it will become increasingly vulnerable.

Reverification of customers, once they have been onboarded, has traditionally relied on a combination of knowledge-based credentials such as usernames and passwords plus the identification of a trusted device. Today, many organizations add one-time passcodes (OTPs) or authenticator apps to prove that the trusted device is in the hands of the person attempting to gain access to services. While this "step-up" or multi-factor authentication (MFA) improves security, it doesn't prove that the authorized person is the one attempting to access the service; the cumbersome process of finding the authenticator app or the SMS'd OTP merely confirms that whoever is attempting access has physical or virtual control of the trusted device.
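As a minimal illustration of that limitation, the sketch below checks a time-based OTP using the pyotp library; the secret and submitted code are hypothetical. Passing the check proves possession of the enrolled device or secret, and nothing about who is holding it.

```python
# Minimal sketch of a step-up OTP check (hypothetical values, pyotp library).
import pyotp

# Shared secret provisioned to the "trusted device" at enrolment (hypothetical).
device_secret = pyotp.random_base32()
totp = pyotp.TOTP(device_secret)

# In reality this is typed in by whoever currently holds the device.
submitted_code = totp.now()

if totp.verify(submitted_code, valid_window=1):
    print("Device possession confirmed - but the holder's identity is still unproven.")
else:
    print("OTP rejected.")
```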

This is where voice intelligence and other signals can start to help. Voice can be applied across multiple use cases to bolster security and enhance existing identity and security programs:

  1. Adding voice at the point of onboarding makes it harder for a fraudster to obtain or defeat every biometric and ID fraud signal; more independent signals mean a lower probability of being hijacked. It's math (see the illustration after this list).

  2. The best voice technology is far better at identifying recorded and synthetic voices, so it acts as a stronger fraud check in both onboarding and reverification. Voices can be harder to spoof because the technology can analyze the voiceprint patterns created by a machine.

  3. When allowing access to services or reverifying a customer, adding verification against an existing voice biometric alongside the MFA or OTP requirement significantly enhances security.

  4. Combined, the factors above can all but eliminate account takeover.

  5. Voice is more accessible than any other form factor, allowing all parts of society to access services.

  6. Capturing consent and the voiceprint at the point of onboarding ensures you have the voice biometric for all future customer engagements. If you have a call center or chat channel, for example, you will already have your customer's voice biometric, enabling seamless, secure access to those channels.

  7. 1:1 and 1:many fraud analytics based on voice are powerful, and become more powerful as you enroll more customers and have more voices to analyze; they can identify fraudsters across a network with pinpoint accuracy and help remove fraud from that network completely.
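
To illustrate the math behind point 1: if each verification signal must be defeated independently, the chance of a fraudster beating all of them is the product of the individual chances. The probabilities below are purely hypothetical and chosen only to show the shape of the calculation.

```python
# Hypothetical spoof-success rates per independent signal (illustrative only).
signal_spoof_probability = {
    "document_check": 0.05,
    "face_match": 0.02,
    "liveness_check": 0.03,
    "voice_biometric": 0.01,
}

combined = 1.0
for signal, p in signal_spoof_probability.items():
    combined *= p  # assumes the signals fail independently

print(f"Chance of defeating every signal: {combined:.7f}")
# 0.05 * 0.02 * 0.03 * 0.01 = 0.0000003, i.e. about 3 in 10 million under these assumptions
```

In this example, adding voice as a fourth independent signal cuts an already small spoofing probability by another factor of 100.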

Let’s be clear: voice alone cannot determine whether a person is who they say they are at onboarding; it will always need to be combined with other approaches and signals. But once trusted, voice is the best and most reliable biometric and signal available, and when combined with others it significantly increases security without adding any friction to the process. It’s faster to speak into a mic than to load a camera and record a face (or at least just as fast :) )
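
As a purely illustrative sketch of that combination, the snippet below fuses a hypothetical voice-match score and synthetic-speech score with device and OTP checks into a single access decision. The field names, scores, and thresholds are assumptions for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class AccessSignals:
    voice_match_score: float      # 0..1 similarity to the enrolled voiceprint (hypothetical scale)
    synthetic_voice_score: float  # 0..1 likelihood the audio is recorded or synthetic
    trusted_device: bool          # device fingerprint recognized
    otp_passed: bool              # step-up OTP check succeeded

def allow_access(s: AccessSignals) -> bool:
    # Voice addresses WHO is speaking; device + OTP address WHAT they control.
    voice_ok = s.voice_match_score >= 0.85 and s.synthetic_voice_score <= 0.10
    possession_ok = s.trusted_device and s.otp_passed
    return voice_ok and possession_ok

print(allow_access(AccessSignals(0.92, 0.03, True, True)))  # True: genuine voice plus trusted device
print(allow_access(AccessSignals(0.92, 0.40, True, True)))  # False: audio looks recorded or synthetic
```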

Get in touch to talk about how voice can be trialed in your telephony, chat, mobile, and web channels to both beef up security and provide another low-friction option to access services.

Visit www.aurayasystems.com or email us at info@aurayasystems.com.

