Characterizing the temporal dynamics of universal speech representations for generalizable deepfake detection
Authors: Yi Zhu, Saurabh Powar, Tiago H. Falk
Published: 2023-09-15 01:37:45+00:00
AI Summary
This paper addresses the lack of generalizability in deepfake speech detection systems by focusing on the long-term temporal dynamics of universal speech representations. The authors propose a method to characterize these dynamics and show that different generative models produce similar dynamics patterns, which improves deepfake detection on attacks unseen during training.
Abstract
Existing deepfake speech detection systems lack generalizability to unseen attacks, i.e., samples generated by generative algorithms not seen during training. Recent studies have explored the use of universal speech representations to tackle this issue and have obtained promising results. These works, however, have focused on innovating downstream classifiers while leaving the representation itself untouched. In this study, we argue that characterizing the long-term temporal dynamics of these representations is crucial for generalizability, and we propose a new method to assess representation dynamics. With the proposed method, we show that different generative models produce similar representation dynamics patterns. Experiments on the ASVspoof 2019 and 2021 datasets validate the benefits of the proposed method for detecting deepfakes generated by methods unseen during training, with significant improvements over several benchmark methods.
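To make the core idea concrete, below is a minimal sketch, not the authors' implementation: it assumes frame-level embeddings from a pretrained universal speech model (e.g., wav2vec 2.0 or WavLM) and summarizes their long-term temporal dynamics with a per-dimension modulation spectrum pooled into a fixed-size descriptor. The function name, the binning scheme, and the placeholder embeddings are illustrative assumptions only.

```python
# Hedged sketch: one simple way to characterize long-term temporal dynamics of
# frame-level speech representations. Not the paper's exact method.
import numpy as np

def temporal_dynamics_descriptor(embeddings: np.ndarray, n_mod_bins: int = 32) -> np.ndarray:
    """embeddings: (T, D) frame-level features, e.g., from wav2vec 2.0 / WavLM.
    Returns a fixed-size vector describing how each dimension evolves over time."""
    # Remove the per-dimension mean so the descriptor captures dynamics,
    # not static offsets of the representation.
    x = embeddings - embeddings.mean(axis=0, keepdims=True)
    # Magnitude spectrum along the time axis = modulation spectrum per dimension.
    mod = np.abs(np.fft.rfft(x, axis=0))                        # (T//2 + 1, D)
    # Pool the modulation axis into a fixed number of bins so utterances of
    # different lengths yield descriptors of the same size.
    bins = np.array_split(mod, n_mod_bins, axis=0)
    pooled = np.stack([b.mean(axis=0) for b in bins], axis=0)   # (n_mod_bins, D)
    return pooled.flatten()

# Usage with placeholder embeddings standing in for real model outputs
# (roughly 8 s of speech at 50 frames/s, 768-dimensional features).
rng = np.random.default_rng(0)
fake_embeddings = rng.standard_normal((400, 768))
descriptor = temporal_dynamics_descriptor(fake_embeddings)
print(descriptor.shape)  # (24576,)
```

A descriptor of this kind could then be fed to any downstream classifier; the paper's claim is that such dynamics-level information, rather than a new classifier, is what transfers across generative models.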