Towards robust audio spoofing detection: a detailed comparison of traditional and learned features

Authors: Balamurali BT, Kin Wah Edward Lin, Simon Lui, Jer-Ming Chen, Dorien Herremans

Published: 2019-05-28 06:51:18+00:00

AI Summary

This research introduces a robust audio spoofing detection system that generalizes across various replay spoofing techniques, unlike most existing systems. It achieves this by comparing traditional audio features with features learned via an autoencoder, ultimately demonstrating the importance of combining both for optimal performance.

Abstract

Automatic speaker verification, like every other biometric system, is vulnerable to spoofing attacks. Using only a few minutes of recorded voice of a genuine client of a speaker verification system, attackers can develop a variety of spoofing attacks that might trick such systems. Detecting these attacks using the audio cues present in the recordings is an important challenge. Most existing spoofing detection systems depend on knowing the spoofing technique used. With this research, we aim to overcome this limitation by examining robust audio features, both traditional and those learned through an autoencoder, that generalize over different types of replay spoofing. Furthermore, we provide a detailed account of all the steps necessary to set up state-of-the-art audio feature extraction, pre-, and post-processing, such that the (non-audio expert) machine learning researcher can implement such systems. Finally, we evaluate the performance of our robust replay spoofing detection system with a wide variety of both extracted and machine-learned audio features, individually and in combination, on the 'out in the wild' ASVspoof 2017 dataset. This dataset contains a variety of new spoofing configurations. Since our focus is on examining which features ensure robustness, we base our system on a traditional Gaussian Mixture Model-Universal Background Model. We then systematically investigate the relative contribution of each feature set. The fused models, based respectively on the known audio features and on the machine-learned features, achieve comparable performance, with an Equal Error Rate (EER) of 12%. The final best performing model, which obtains an EER of 10.8%, is a hybrid model that contains both known and machine-learned features, thus revealing the importance of incorporating both types of features when developing a robust spoofing prediction model.
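
To make the feature pipeline concrete, below is a minimal sketch of traditional cepstral feature extraction with common pre- and post-processing steps. It uses librosa and MFCCs as an illustrative stand-in for the paper's feature sets (CQCC extraction is not part of librosa); the function name and parameter values are assumptions for illustration, not the paper's exact configuration.

import numpy as np
import librosa

def extract_features(wav_path, sr=16000, n_mfcc=20):
    # Load the recording as mono audio at a fixed sample rate.
    y, _ = librosa.load(wav_path, sr=sr)
    # Frame-level MFCCs: shape (n_mfcc, n_frames).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Typical post-processing: append delta and delta-delta coefficients.
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    feats = np.vstack([mfcc, delta, delta2])
    # Per-utterance cepstral mean and variance normalization.
    feats = (feats - feats.mean(axis=1, keepdims=True)) / (feats.std(axis=1, keepdims=True) + 1e-8)
    return feats.T  # (n_frames, 3 * n_mfcc)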


Key findings
The hybrid model, combining traditional and autoencoder-learned features, achieved the best performance with an EER of 10.8%. Individual models based on different feature sets varied in performance, with Constant Q Cepstral Coefficients (CQCCs) initially performing best among the traditional features. Data augmentation using the autoencoder did not consistently improve performance across all feature sets.
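
Since all results above are Equal Error Rates, a brief note on the metric may help: the EER is the operating point at which the false acceptance rate (spoofed recordings accepted as genuine) equals the false rejection rate (genuine recordings rejected). Below is a minimal sketch of how it can be computed from detection scores, assuming labels of 1 for genuine and 0 for spoof, with higher scores meaning "more genuine".

import numpy as np

def compute_eer(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_gap, eer = np.inf, None
    # Sweep every observed score as a candidate decision threshold.
    for t in np.unique(scores):
        accept = scores >= t
        far = np.mean(accept[labels == 0])   # spoofs wrongly accepted
        frr = np.mean(~accept[labels == 1])  # genuine trials wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

On this 0-1 scale, the paper's reported EER of 10.8% corresponds to 0.108.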
Approach
The authors developed a hybrid system with two branches: one processing traditional audio features and the other processing features learned by an autoencoder. Both branches use a GMM-UBM model, and their outputs are fused using logistic regression to improve accuracy.
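
A minimal sketch of this two-branch back-end is given below, using scikit-learn. It simplifies the paper's GMM-UBM by fitting separate genuine and spoof GMMs rather than MAP-adapting from a universal background model, and all function names and hyperparameters are illustrative assumptions rather than the authors' configuration.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

def train_branch(genuine_frames, spoof_frames, n_components=64):
    # One GMM per class; frame matrices have shape (n_frames, n_dims).
    gmm_gen = GaussianMixture(n_components=n_components, covariance_type='diag').fit(genuine_frames)
    gmm_spf = GaussianMixture(n_components=n_components, covariance_type='diag').fit(spoof_frames)
    return gmm_gen, gmm_spf

def branch_score(branch, utterance_frames):
    # Average log-likelihood ratio over frames: higher means "more genuine".
    gmm_gen, gmm_spf = branch
    return gmm_gen.score(utterance_frames) - gmm_spf.score(utterance_frames)

def fuse(traditional_scores, learned_scores, labels):
    # Learn fusion weights over the per-utterance scores of both branches.
    X = np.column_stack([traditional_scores, learned_scores])
    return LogisticRegression().fit(X, labels)

At test time, fusion reduces to calling the fitted model's predict_proba on the stacked branch scores of unseen utterances.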
Datasets
ASVspoof 2017 dataset (protocol V2)
Model(s)
Gaussian Mixture Model-Universal Background Model (GMM-UBM), Autoencoder, Logistic Regression (for fusion)
Author countries
Singapore, China