
How accurate is voice biometrics?

First of all, no biometric system is 100% foolproof. Industry reports and studies indicate that success rates above 90% should be the minimum acceptable, where success means that a person is able to authenticate their rightfully claimed identity.

Regardless of competitive claims, a clue to the nature of accuracy lies in three measures that reflect the performance of a system: the equal error rate (EER), the false acceptance rate (FAR), and the false rejection rate (FRR). Allowing an impostor into the system is a false acceptance error; denying a genuine user access to the system is a false rejection error.

Industry reports indicate that optimal, text-dependent voice biometric engines can achieve a FAR of below 1%, with a corresponding FRR of less than 3%. However, it is difficult to compare systems based on published ‘accuracy’ figures. That’s partly because there is no industry standard dataset against which to measure performance, and partly because threshold settings and similarity ratings are vendor and implementation specific.

Notably, the EER is the point at which the FAR and FRR curves intersect. Because it is independent of the threshold setting, unlike FAR and FRR, the EER is the commonly accepted metric for comparing the separability of systems, i.e. their effectiveness at differentiating between genuine users and impostors.
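To make the relationship between these three measures concrete, here is a minimal sketch of how FAR, FRR, and EER could be computed from a set of similarity scores. The scores and the simple threshold sweep are illustrative assumptions, not output from any real voice biometric engine.

```python
import numpy as np

# Hypothetical similarity scores (0 = no match, 1 = perfect match).
# These values are invented for illustration only.
genuine_scores = np.array([0.82, 0.91, 0.58, 0.88, 0.67, 0.95, 0.79, 0.62])
impostor_scores = np.array([0.35, 0.52, 0.41, 0.66, 0.28, 0.47, 0.71, 0.33])

def far_frr(threshold):
    """FAR: fraction of impostors accepted; FRR: fraction of genuine users rejected."""
    far = np.mean(impostor_scores >= threshold)
    frr = np.mean(genuine_scores < threshold)
    return far, frr

# Sweep the decision threshold; the EER lies where FAR and FRR meet.
thresholds = np.linspace(0.0, 1.0, 1001)
rates = [far_frr(t) for t in thresholds]
eer_index = int(np.argmin([abs(far - frr) for far, frr in rates]))
far, frr = rates[eer_index]
eer = (far + frr) / 2
print(f"threshold={thresholds[eer_index]:.3f}  FAR={far:.3f}  FRR={frr:.3f}  EER={eer:.3f}")
```

The sweep shows the trade-off described above: lowering the threshold accepts more impostors (FAR rises), raising it rejects more genuine users (FRR rises), and the EER summarises both in a single threshold-independent number.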

Notwithstanding all that, the sensible approach with voice biometrics is to run trials in your target environment, with real-world users.