The PartialSpoof Database and Countermeasures for the Detection of Short Fake Speech Segments Embedded in an Utterance

Authors: Lin Zhang, Xin Wang, Erica Cooper, Nicholas Evans, Junichi Yamagishi

Published: 2022-04-11 15:09:07+00:00

AI Summary

This paper introduces a new spoofing scenario, Partial Spoof (PS), in which synthesized speech segments are embedded within bona fide utterances. It proposes improved countermeasures (CMs) that use self-supervised pre-trained models for feature extraction, together with a new CM architecture that leverages segment-level labels at multiple temporal resolutions for both utterance- and segment-level detection, achieving low error rates on the PartialSpoof and ASVspoof 2019 LA databases.

Abstract

Automatic speaker verification is susceptible to various manipulations and spoofing, such as text-to-speech synthesis, voice conversion, replay, tampering, adversarial attacks, and so on. We consider a new spoofing scenario called Partial Spoof (PS) in which synthesized or transformed speech segments are embedded into a bona fide utterance. While existing countermeasures (CMs) can detect fully spoofed utterances, there is a need for their adaptation or extension to the PS scenario. We propose various improvements to construct a significantly more accurate CM that can detect and locate short generated spoofed speech segments at finer temporal resolutions. First, we introduce newly developed self-supervised pre-trained models as enhanced feature extractors. Second, we extend our PartialSpoof database by adding segment labels for various temporal resolutions. Since the short spoofed speech segments to be embedded by attackers are of variable length, six different temporal resolutions are considered, ranging from as short as 20 ms to as long as 640 ms. Third, we propose a new CM that enables the simultaneous use of the segment-level labels at different temporal resolutions as well as utterance-level labels to execute utterance- and segment-level detection at the same time. We also show that the proposed CM is capable of detecting spoofing at the utterance level with low error rates in the PS scenario as well as in a related logical access (LA) scenario. The equal error rates of utterance-level detection on the PartialSpoof database and ASVspoof 2019 LA database were 0.77% and 0.90%, respectively.


Key findings
The proposed CM achieved low equal error rates (EERs) of 0.77% and 0.90% for utterance-level detection on the PartialSpoof and ASVspoof 2019 LA databases, respectively. The multi-resolution training strategy improved utterance-level detection but showed mixed results for segment-level detection, performing better at coarser resolutions. Analysis revealed that detection accuracy was impacted by the number of concatenated boundaries in the spoofed segments and the strength of the spoofing systems used.
Approach
The authors improve audio deepfake detection by using self-supervised pre-trained models as feature extractors and a new countermeasure (CM) architecture. This CM uses segment-level labels at multiple temporal resolutions (20 ms to 640 ms) to perform utterance- and segment-level detection simultaneously.
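To make the multi-resolution labeling concrete, the sketch below derives coarser segment labels from fine-grained 20 ms labels by max pooling, so a coarse segment is marked spoofed if any fine segment it spans is spoofed. This is a minimal illustration of the idea; the exact pooling rule, function name, and padding convention are assumptions, not the paper's published implementation.

```python
import numpy as np

# Temporal resolutions considered in the PartialSpoof labels (milliseconds).
RESOLUTIONS_MS = [20, 40, 80, 160, 320, 640]

def pool_labels(base_labels, base_ms=20, target_ms=40):
    """Derive coarser segment labels from fine-grained ones.

    A coarse segment is labeled spoofed (1) if any of the fine segments
    it covers is spoofed. Max pooling here is an illustrative assumption.
    """
    factor = target_ms // base_ms
    labels = np.asarray(base_labels)
    # Pad the tail with bona fide (0) so the length divides evenly.
    pad = (-len(labels)) % factor
    labels = np.pad(labels, (0, pad), constant_values=0)
    return labels.reshape(-1, factor).max(axis=1)

# Example: 20 ms labels for a 160 ms utterance with one spoofed span.
frame_labels = [0, 0, 1, 1, 0, 0, 0, 0]
print(pool_labels(frame_labels, target_ms=40).tolist())   # [0, 1, 0, 0]
print(pool_labels(frame_labels, target_ms=160).tolist())  # [1]
```

Labels produced this way at each resolution in `RESOLUTIONS_MS` can then supervise the CM's segment-level outputs alongside the utterance-level label.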
Datasets
PartialSpoof database (extended with segment labels at various temporal resolutions), ASVspoof 2019 LA database
Model(s)
wav2vec 2.0 and WavLM (as feature extractors); a gated multilayer perceptron (gMLP) as the scoring module in the back-end. The back-end also incorporates bidirectional long short-term memory (BLSTM) layers in some experiments.
Author countries
Japan, France