
What about languages?

Fortunately, language isn’t a big issue, because voice biometrics works on the sounds that people make, rather than on what they say. Because different languages use different sets of phonemes, some vendors need to train their voice biometric engine for each language to get the best results, or even produce a separate engine per language. The best systems, however, will operate independently of language, because they cater for a wide range of language sounds.
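That language independence follows from how verification works: the engine compares acoustic characteristics, not words. As a rough sketch (the voiceprint vectors below are invented for illustration; a real engine derives high-dimensional embeddings from the audio itself), the decision reduces to measuring how close two voiceprints are, a comparison that carries no notion of vocabulary:

```python
import math

def cosine_similarity(a, b):
    """Compare two voiceprint vectors. The score depends only on the
    vectors' shapes, not on what language produced the audio."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_print, live_print, threshold=0.8):
    """Accept the speaker if the live voiceprint is close enough
    to the enrolled one. The threshold is an illustrative value."""
    return cosine_similarity(enrolled_print, live_print) >= threshold

# Toy 3-dimensional voiceprints; real embeddings are far larger.
enrolled          = [0.9, 0.1, 0.4]
same_speaker      = [0.85, 0.15, 0.38]
different_speaker = [0.1, 0.9, 0.2]

print(verify(enrolled, same_speaker))       # similar voiceprints -> True
print(verify(enrolled, different_speaker))  # dissimilar -> False
```

Training per language, as described above, amounts to tuning how those voiceprints are extracted, not changing this comparison step.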

That’s not to say that a system can’t be fine-tuned, but training a system in such a way can involve more than just language. It’s feasible for a system to be fine-tuned for a fixed, text-dependent passphrase and for the speaker domain (e.g., environment, networks, and devices), but the most beneficial effects can be gained by applying best practices during set-up and implementation.

Language becomes a factor where speech recognition is used in tandem with speaker verification (speaker recognition), to validate what is said in addition to who said it. However, that is a separate issue, which has no bearing on the performance of the voice biometric engine itself.
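To make the separation concrete, such a tandem set-up runs two independent checks: a language-independent biometric match (who is speaking) and a language-dependent recognition match (what was said). The helper functions below are trivial stand-ins invented for this sketch; a real deployment would call an ASR engine and a biometric engine respectively:

```python
# Hypothetical stand-ins: recognise() would be a language-specific
# speech recognition engine; verify_speaker() a language-independent
# voice biometric engine. Here both are stubs over a dict "audio".
def recognise(audio):
    return audio.get("transcript", "")

def verify_speaker(audio, enrolled_speaker):
    return audio.get("speaker_id") == enrolled_speaker

def validate_caller(audio, expected_phrase, enrolled_speaker):
    """Combine the two checks: biometrics answers WHO is speaking,
    speech recognition answers WHAT was said. Either can fail
    independently, which is why language affects only the second."""
    said_right_thing = recognise(audio).strip().lower() == expected_phrase.lower()
    is_right_speaker = verify_speaker(audio, enrolled_speaker)
    return is_right_speaker and said_right_thing

sample = {"transcript": "My voice is my password", "speaker_id": "alice"}
print(validate_caller(sample, "my voice is my password", "alice"))  # True
print(validate_caller(sample, "my voice is my password", "bob"))    # False
```

Swapping the recognition engine for another language leaves the biometric check, and hence the verification accuracy, untouched.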