Spoofing Detection Goes Noisy: An Analysis of Synthetic Speech Detection in the Presence of Additive Noise

Authors: Cemal Hanilci, Tomi Kinnunen, Md Sahidullah, Aleksandr Sizov

Published: 2016-03-12 17:44:48+00:00

AI Summary

This research analyzes the robustness of state-of-the-art synthetic speech detectors under additive noise. It compares various acoustic feature sets and back-end models (GMM and i-vector) to determine their performance in noisy conditions, revealing significant performance degradation even at high signal-to-noise ratios.

Abstract

Automatic speaker verification (ASV) technology is recently finding its way to end-user applications for secure access to personal data, smart services or physical facilities. Similar to other biometric technologies, speaker verification is vulnerable to spoofing attacks where an attacker masquerades as a particular target speaker via impersonation, replay, text-to-speech (TTS) or voice conversion (VC) techniques to gain illegitimate access to the system. We focus on TTS and VC that represent the most flexible, high-end spoofing attacks. Most of the prior studies on synthesized or converted speech detection report their findings using high-quality clean recordings. Meanwhile, the performance of spoofing detectors in the presence of additive noise, an important consideration in practical ASV implementations, remains largely unknown. To this end, we analyze the suitability of state-of-the-art synthetic speech detectors under additive noise with a special focus on front-end features. Our comparison includes eight acoustic feature sets, five related to spectral magnitude and three to spectral phase information. Our extensive experiments on the ASVspoof 2015 corpus reveal several important findings. Firstly, all the countermeasures break down even at relatively high signal-to-noise ratios (SNRs) and fail to generalize to noisy conditions. Secondly, speech enhancement is not found helpful. Thirdly, the GMM back-end generally outperforms the more involved i-vector back-end. Fourthly, concerning the compared features, the Mel-frequency cepstral coefficients (MFCCs) and subband spectral centroid magnitude coefficients (SCMCs) perform the best on average, though the winning method depends on SNR and noise type. Finally, a study with two score fusion strategies shows that combining different feature-based systems improves recognition accuracy for known and unknown attacks in both clean and noisy conditions.
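
The abstract singles out MFCCs among the best-performing front-ends. Below is a minimal sketch of MFCC extraction with delta coefficients, assuming librosa is available; the file name, sample rate, frame settings, and coefficient count are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal MFCC front-end sketch (illustrative settings, not the paper's exact setup).
import librosa
import numpy as np

signal, sr = librosa.load("utt.wav", sr=16000)  # hypothetical utterance file
mfcc = librosa.feature.mfcc(
    y=signal, sr=sr, n_mfcc=20,
    n_fft=int(0.020 * sr),       # 20 ms analysis window
    hop_length=int(0.010 * sr),  # 10 ms frame shift
)
# Append delta and delta-delta coefficients, a common countermeasure setup.
feats = np.vstack([
    mfcc,
    librosa.feature.delta(mfcc),
    librosa.feature.delta(mfcc, order=2),
])
print(feats.shape)  # (60, n_frames)
```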


Key findings
All countermeasures performed poorly even at high SNRs, failing to generalize to noisy conditions. Speech enhancement techniques did not improve performance. GMM back-ends generally outperformed i-vector back-ends, and score fusion improved accuracy for both known and unknown attacks.
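
The findings mention score fusion of different feature-based systems. The sketch below shows a simple weighted linear fusion of per-system scores; the weights and scores are placeholders, and a real system would learn the combination on development data rather than use equal weights, so this is not a reproduction of the paper's fusion strategies.

```python
# Hedged sketch of weighted linear score fusion over several detectors.
import numpy as np

def fuse_scores(score_matrix, weights):
    """score_matrix: (n_systems, n_trials) detector scores; returns fused scores per trial."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalise weights to sum to one
    return weights @ np.asarray(score_matrix)    # weighted sum across systems

# Example: three hypothetical subsystems scored on four trials.
scores = [[1.2, -0.3, 0.8, -1.1],
          [0.9, -0.5, 1.0, -0.7],
          [1.5,  0.1, 0.4, -1.4]]
print(fuse_scores(scores, weights=[1.0, 1.0, 1.0]))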
Approach
The authors evaluated existing synthetic speech detection methods on the ASVspoof 2015 corpus after adding various noise types at several noise levels. They compared eight acoustic feature sets (five magnitude-based, three phase-based) with GMM and i-vector back-ends, and also investigated two score fusion strategies; a rough back-end sketch follows.
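
As a rough illustration of the GMM back-end, the sketch below fits one GMM on genuine-speech features and one on spoofed-speech features, then scores a test utterance by its average per-frame log-likelihood ratio. scikit-learn's GaussianMixture stands in for the paper's implementation, and the data, feature dimension, and component count are placeholder assumptions.

```python
# GMM back-end sketch: genuine-vs-spoofed log-likelihood ratio scoring.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
genuine_feats = rng.normal(0.0, 1.0, size=(2000, 20))  # placeholder training frames
spoofed_feats = rng.normal(0.5, 1.2, size=(2000, 20))

gmm_genuine = GaussianMixture(n_components=8, covariance_type="diag").fit(genuine_feats)
gmm_spoofed = GaussianMixture(n_components=8, covariance_type="diag").fit(spoofed_feats)

def llr_score(utterance_feats):
    """Average per-frame log-likelihood ratio: higher means more genuine-like."""
    return (gmm_genuine.score_samples(utterance_feats)
            - gmm_spoofed.score_samples(utterance_feats)).mean()

test_feats = rng.normal(0.0, 1.0, size=(300, 20))  # one placeholder test utterance
print(llr_score(test_feats))
```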
Datasets
ASVspoof 2015 corpus with added white, babble, and car noise at SNRs of 0, 10, and 20 dB.
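
The noisy evaluation data are created by adding noise to the clean corpus at fixed SNRs. The sketch below mixes a noise signal into a speech signal at a target SNR using the standard power-based scaling; it illustrates the general procedure under that assumption rather than the paper's exact tooling, and the signals are random placeholders.

```python
# Sketch of additive-noise mixing at a target SNR (power-based scaling).
import numpy as np

def add_noise(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    noise = np.resize(noise, speech.shape)             # loop/crop noise to speech length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
clean = rng.normal(size=16000)   # placeholder: 1 s of 16 kHz "speech"
white = rng.normal(size=16000)   # placeholder white noise
noisy = add_noise(clean, white, snr_db=10)
```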
Model(s)
Gaussian mixture models (GMMs) and i-vector back-ends with cosine similarity and PLDA scoring.
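
Cosine similarity is one of the two i-vector scoring rules listed. The snippet below shows cosine scoring between a test i-vector and a class-representative i-vector; the vectors are random placeholders rather than i-vectors extracted with a trained total-variability model, and PLDA scoring is not sketched.

```python
# Cosine scoring sketch for i-vectors (placeholder vectors, no i-vector extractor).
import numpy as np

def cosine_score(w_ref, w_test):
    """Cosine similarity between two i-vectors; higher means more similar."""
    return float(np.dot(w_ref, w_test) /
                 (np.linalg.norm(w_ref) * np.linalg.norm(w_test)))

rng = np.random.default_rng(0)
iv_genuine_class = rng.normal(size=400)  # placeholder 400-dim class i-vector
iv_test = rng.normal(size=400)           # placeholder test i-vector
print(cosine_score(iv_genuine_class, iv_test))
```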
Author countries
Finland, Turkey