Investigating Prosodic Signatures via Speech Pre-Trained Models for Audio Deepfake Source Attribution

Authors: Orchid Chetia Phukan, Drishti Singh, Swarup Ranjan Behera, Arun Balaji Buduru, Rajesh Sharma

Published: 2024-12-23 18:53:15+00:00

AI Summary

This research investigates the use of speech pre-trained models (PTMs) for audio deepfake source attribution (ADSD). It finds that the x-vector model, a speaker recognition PTM, achieves the best performance, and proposes FINDER, a novel fusion framework, to further improve ADSD accuracy by combining PTM representations.

Abstract

In this work, we investigate various state-of-the-art (SOTA) speech pre-trained models (PTMs) for their capability to capture prosodic signatures of generative sources for audio deepfake source attribution (ADSD). These prosodic characteristics, unique to each source, can be considered among the major signatures for ADSD; the better a PTM captures prosodic cues, the better the ADSD performance. For our experiments, we consider various SOTA PTMs that have shown top performance on different prosodic tasks, evaluated on the benchmark datasets ASVSpoof 2019 and CFAD. x-vector (a speaker recognition PTM) attains the highest performance among all PTMs considered despite having the fewest model parameters. This advantage may stem from its speaker recognition pre-training, which enables it to better capture the unique prosodic characteristics of each source. Further, motivated by tasks such as audio deepfake detection and speech recognition, where fusion of PTM representations leads to improved performance, we explore the same and propose FINDER for effective fusion of such representations. By fusing Whisper and x-vector representations through FINDER, we achieve the topmost performance compared to all individual PTMs as well as baseline fusion techniques, attaining SOTA results.


Key findings
The x-vector model consistently outperformed the other PTMs. FINDER, the proposed fusion method, significantly improved ADSD performance, achieving state-of-the-art results when combining Whisper and x-vector representations. More broadly, fusing PTM representations generally improved performance over using individual PTMs.
Approach
The authors evaluate several state-of-the-art speech PTMs for their ability to capture prosodic features for ADSD. They propose FINDER, a novel fusion framework using Rényi divergence, to combine representations from multiple PTMs and improve classification accuracy.
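The paper does not detail FINDER's internals beyond its use of Rényi divergence for fusing PTM representations, so the following is only a minimal illustrative sketch, not the authors' method. It shows one plausible reading: each embedding (e.g., from Whisper and x-vector) is softmax-normalised into a distribution, the Rényi divergence between the two distributions is computed as a fusion score, and the raw embeddings are concatenated. The function names, the equal-dimension assumption, and the choice of order `alpha` are all hypothetical.

```python
import numpy as np

def renyi_divergence(p, q, alpha=0.5):
    """Rényi divergence of order alpha between discrete distributions p and q.

    D_alpha(p || q) = 1 / (alpha - 1) * log(sum_i p_i^alpha * q_i^(1 - alpha)).
    Requires alpha > 0 and alpha != 1 (alpha -> 1 recovers KL divergence).
    """
    assert alpha > 0 and alpha != 1
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

def softmax(v):
    # Numerically stable softmax, turning an embedding into a distribution.
    e = np.exp(v - v.max())
    return e / e.sum()

def fuse_representations(x_a, x_b, alpha=0.5):
    """Hypothetical fusion of two same-dimensional PTM embeddings.

    Returns the concatenated feature vector plus the Rényi divergence
    between the softmax-normalised embeddings as an auxiliary score.
    """
    d = renyi_divergence(softmax(x_a), softmax(x_b), alpha)
    return np.concatenate([x_a, x_b]), d
```

For identical inputs the divergence is zero, and it grows as the two normalised representations diverge, which is one way such a term could guide a fusion network toward complementary features.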
Datasets
ASVSpoof 2019 and CFAD
Model(s)
Wav2vec2, WavLM, XLS-R, Whisper, x-vector, Wav2vec2-emo. FINDER framework for fusing representations.
Author countries
India, Estonia