Detection and Evaluation of human and machine generated speech in spoofing attacks on automatic speaker verification systems

Authors: Yang Gao, Jiachen Lian, Bhiksha Raj, Rita Singh

Published: 2020-11-07 04:42:27+00:00

AI Summary

This paper investigates the effectiveness of human and machine-generated speech in spoofing automatic speaker verification (ASV) systems. It proposes using features capturing the fine-grained inconsistencies of human speech production to detect deepfakes, demonstrating that fundamental frequency sequence-related entropy, spectral envelope, and aperiodic parameters are promising for robust deepfake audio detection.

Abstract

Automatic speaker verification (ASV) systems utilize the biometric information in human speech to verify the speaker's identity. The techniques used for performing speaker verification are often vulnerable to malicious attacks that attempt to induce the ASV system to return wrong results, allowing an impostor to bypass the system and gain access. Attackers use a multitude of spoofing techniques for this, such as voice conversion, audio replay, speech synthesis, etc. In recent years, easily available tools to generate deepfaked audio have increased the potential threat to ASV systems. In this paper, we compare the potential of human impersonation (voice disguise) based attacks with attacks based on machine-generated speech, on black-box and white-box ASV systems. We also study countermeasures by using features that capture the unique aspects of human speech production, under the hypothesis that machines cannot emulate many of the fine-level intricacies of the human speech production mechanism. We show that fundamental frequency sequence-related entropy, spectral envelope, and aperiodic parameters are promising candidates for robust detection of deepfaked speech generated by unknown methods.


Key findings
Deepfake audio attacks spoof ASV systems more effectively than human impersonation (voice disguise) or older synthesis methods. Features capturing the fine-grained characteristics of human speech production (e.g., fundamental frequency sequence-related entropy, spectral envelope, and aperiodic parameters) effectively distinguish real from fake speech, improving ASV robustness. Combining these features with existing ASV features further enhances detection performance.
Approach
The authors hypothesize that machine-generated speech fails to reproduce the fine-level inconsistencies of human speech production. They evaluate features such as fundamental frequency sequence-related entropy, spectral envelope, and aperiodic parameters to distinguish human from machine-generated speech, and use these features to improve ASV robustness against spoofing attacks.
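To make the F0-entropy idea concrete, here is a minimal sketch of computing the Shannon entropy of a fundamental-frequency contour. This is a hypothetical illustration, not the paper's exact formulation: the function name `f0_entropy`, the histogram binning, and the treatment of unvoiced frames (F0 = 0) are all assumptions; the paper does not specify its entropy estimator here.

```python
import numpy as np

def f0_entropy(f0, n_bins=32):
    """Shannon entropy (bits) of a fundamental-frequency (F0) sequence.

    Hypothetical sketch: voiced frames are histogrammed and the entropy
    of the resulting distribution is returned. Natural speech, with its
    jitter and micro-variation, tends to spread F0 values across more
    bins than an overly smooth synthetic contour would.
    """
    f0 = np.asarray(f0, dtype=float)
    voiced = f0[f0 > 0]          # drop unvoiced frames (conventionally F0 == 0)
    if voiced.size == 0:
        return 0.0
    hist, _ = np.histogram(voiced, bins=n_bins)
    p = hist / hist.sum()        # normalize counts to a probability distribution
    p = p[p > 0]                 # 0 * log 0 is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Toy contours: a wandering F0 with added jitter vs. a perfectly smooth one.
rng = np.random.default_rng(0)
t = np.linspace(0, 6, 200)
natural_like = 120 + 15 * np.sin(t) + rng.normal(0, 3, t.size)
smooth_synth = 120 + 15 * np.sin(t)
print(f0_entropy(natural_like), f0_entropy(smooth_synth))
```

A real detector would compute such statistics per utterance (alongside spectral-envelope and aperiodicity features) and feed them to a classifier such as the MLP countermeasure model described below.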
Datasets
ASVspoof 2019 (logical access data), VoxCeleb, Fake or Real (FoR) dataset, and a custom impersonation dataset (CID).
Model(s)
A Thin ResNet-34 based ASV model with Self-Attentive Pooling (SAP), a three-layer MLP as the countermeasure model, and a modified residual network architecture for spoofing countermeasures.
Author countries
USA