Characterizing the temporal dynamics of universal speech representations for generalizable deepfake detection

Authors: Yi Zhu, Saurabh Powar, Tiago H. Falk

Published: 2023-09-15 01:37:45+00:00

AI Summary

This paper addresses the lack of generalizability in deepfake speech detection systems by focusing on the long-term temporal dynamics of universal speech representations. The authors propose a method to characterize these dynamics and show that different generative models leave similar dynamic patterns; because this cue is shared across generators, exploiting it improves deepfake detection on attacks unseen during training.

Abstract

Existing deepfake speech detection systems lack generalizability to unseen attacks (i.e., samples generated by generative algorithms not seen during training). Recent studies have explored the use of universal speech representations to tackle this issue and have obtained inspiring results. These works, however, have focused on innovating downstream classifiers while leaving the representation itself untouched. In this study, we argue that characterizing the long-term temporal dynamics of these representations is crucial for generalizability and propose a new method to assess representation dynamics. Indeed, we show that different generative models generate similar representation dynamics patterns with our proposed method. Experiments on the ASVspoof 2019 and 2021 datasets validate the benefits of the proposed method to detect deepfakes from methods unseen during training, significantly improving on several benchmark methods.


Key findings
The proposed method significantly improves the generalizability of deepfake speech detection to unseen attacks, outperforming several benchmark methods on the ASVspoof 2021 dataset. Consistent long-term temporal dynamics patterns were observed across different deepfake generation methods, which explains why the approach transfers to attacks not seen during training.
Approach
The authors propose a modulation transformation block (MTB) that applies a short-time Fourier transform to universal speech representations (wav2vec2 and wavLM) to capture long-term temporal dynamics. These dynamics are then used as input to a simple fully connected network for deepfake detection.
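The modulation transformation idea above can be sketched in a few lines: take a frame-level representation, run a short-time Fourier transform along its time axis per embedding dimension, and pool the magnitudes into a modulation spectrum. The window length, hop size, and magnitude pooling below are illustrative assumptions rather than the paper's published MTB configuration, and the random array stands in for an actual wav2vec2/wavLM output.

```python
import numpy as np
from scipy.signal import stft

def modulation_transform(features: np.ndarray, win: int = 32, hop: int = 16) -> np.ndarray:
    """Sketch of a modulation transformation block (MTB).

    Applies a per-dimension short-time Fourier transform along the time
    axis of a (T, D) frame-level representation and averages the
    magnitude over STFT frames, yielding a (modulation_bins, D) summary
    of the representation's long-term temporal dynamics.
    """
    # STFT over the time axis (axis=0); scipy replaces that axis with
    # modulation-frequency bins and appends an STFT-frame axis last,
    # so Z has shape (win // 2 + 1, D, n_frames).
    _, _, Z = stft(features, nperseg=win, noverlap=win - hop, axis=0)
    # Average magnitude over STFT frames -> (win // 2 + 1, D).
    return np.abs(Z).mean(axis=-1)

# Stand-in for a universal speech representation: 100 frames of
# 768-dimensional embeddings (sizes here are arbitrary, not the paper's).
rep = np.random.default_rng(0).standard_normal((100, 768))
mod = modulation_transform(rep)
print(mod.shape)  # (17, 768): one modulation spectrum per embedding dim
```

The resulting fixed-size modulation features could then be flattened and fed to a small fully connected classifier, as the section describes.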
Datasets
ASVspoof 2019 (LA track) and ASVspoof 2021 (DF track), LJspeech, WaveFake
Model(s)
wav2vec2-960h, wavLM-base-plus, LFCC+GMM, RawNet2, fully connected neural networks
Author countries
Canada