DFALLM: Achieving Generalizable Multitask Deepfake Detection by Optimizing Audio LLM Components

Authors: Yupei Li, Li Wang, Yuxiang Wang, Lei Wang, Rizhao Cai, Jie Shi, Björn W. Schuller, Zhizheng Wu

Published: 2025-12-09 09:36:38+00:00

AI Summary

This study proposes DFALLM, an Audio Large Language Model (ALLM) framework designed for generalizable and multitask audio deepfake detection. It addresses the generalization bottleneck of previous ALLMs by systematically optimizing the audio encoder and text-based LLM components. DFALLM achieves state-of-the-art performance across multiple datasets for binary deepfake detection and demonstrates competitive capabilities on advanced tasks such as spoof attribution and localization.

Abstract

Audio deepfake detection has recently garnered public concern due to its implications for security and reliability. Traditional deep learning methods have been widely applied to this task but often lack generalisability when confronted with newly emerging spoofing techniques and with tasks beyond simple binary classification, such as spoof attribution recognition. In principle, Large Language Models (LLMs) are considered to possess the needed generalisation capabilities. However, previous research on Audio LLMs (ALLMs) indicates a generalisation bottleneck in audio deepfake detection performance, even when sufficient data is available. Consequently, this study investigates the model architecture and examines the effects of the primary components of ALLMs, namely the audio encoder and the text-based LLM. Our experiments demonstrate that the careful selection and combination of audio encoders and text-based LLMs are crucial for unlocking the deepfake detection potential of ALLMs. We further propose an ALLM structure capable of generalising deepfake detection abilities to out-of-domain spoofing tests and to other deepfake tasks, such as spoof localisation and spoof attribution recognition. Our proposed model architecture achieves state-of-the-art (SOTA) performance across multiple datasets, including ASVSpoof2019, InTheWild, and Demopage, reaching an average accuracy of 95.76%, and exhibits competitive capabilities in other deepfake detection tasks such as attribution and localisation compared to SOTA audio understanding models. Data and code are provided in the supplementary materials.


Key findings
The choice of audio encoder is the decisive factor for performance, with the acoustically aware Wav2Vec2-BERT significantly outperforming the semantically oriented Whisper. The optimal configuration, Wav2Vec2-BERT combined with Qwen2.5-0.5B, achieved a state-of-the-art average accuracy of 95.76% and generalized effectively across multitask deepfake detection, attribution, and localization. Higher audio frame rates and sufficient training data were also found to enhance detection accuracy.
Approach
The DFALLM framework is a speech language model composed of an audio encoder, a text tokenizer, and a textual LLM. It processes raw audio through the encoder, maps the resulting representations into the text embedding space via a projection module, and concatenates them with textual prompt embeddings so that the LLM can generate task-specific responses. The approach emphasizes acoustically aware audio encoders (such as Wav2Vec2-BERT) and lightweight textual LLMs (such as Qwen2.5-0.5B), and employs a prompt-based multitask strategy for detection, attribution, and localization.
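The sketch below shows one way such an encoder-projector-LLM pipeline can be wired together in PyTorch with Hugging Face Transformers. The checkpoint names, the single-linear-layer projector, and the prompt wording are illustrative assumptions for this summary, not the authors' exact configuration.

# Minimal sketch of the encoder -> projector -> LLM pipeline described above.
# Checkpoints, projector design, and prompt text are assumptions, not the
# paper's reported setup.
import torch
import torch.nn as nn
from transformers import (AutoFeatureExtractor, AutoModel,
                          AutoModelForCausalLM, AutoTokenizer)

class DeepfakeALLM(nn.Module):
    def __init__(self,
                 encoder_name="facebook/w2v-bert-2.0",   # acoustically aware encoder (assumed checkpoint)
                 llm_name="Qwen/Qwen2.5-0.5B"):          # lightweight textual LLM
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.llm = AutoModelForCausalLM.from_pretrained(llm_name)
        # Projection module mapping audio features into the LLM embedding space
        self.projector = nn.Linear(self.encoder.config.hidden_size,
                                   self.llm.config.hidden_size)

    def forward(self, audio_features, prompt_ids):
        # Frame-level acoustic representations from the audio encoder
        audio_hidden = self.encoder(input_features=audio_features).last_hidden_state
        audio_embeds = self.projector(audio_hidden)                  # -> LLM embedding space
        prompt_embeds = self.llm.get_input_embeddings()(prompt_ids)  # textual task prompt
        # Concatenate audio and prompt embeddings; the LLM answers the task in text
        inputs_embeds = torch.cat([audio_embeds, prompt_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)

# Example usage (illustrative binary-detection prompt; multitask prompts would vary):
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
model = DeepfakeALLM()
waveform = torch.randn(16000)  # 1 s of dummy 16 kHz audio
features = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
prompt_ids = tokenizer("Is this audio bona fide or spoofed?", return_tensors="pt").input_ids
outputs = model(features.input_features, prompt_ids)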
Datasets
ASVSpoof2019, SpoofCeleb, MLAADv6, ReplayDF, DFADD, AISHELL3, ADD2023, GigaSpeech, CNCeleb, PartialSpoof, In-the-Wild (ITW), Demopage.
Model(s)
Audio Encoders: Whisper (small, medium, large-v3), Wav2Vec2-BERT. Textual LLMs: Qwen2.5 (0.5B, 1.5B, 7B), Qwen3-0.6B, Llama-1.3B. Fine-tuning technique: LoRA (Low-Rank Adaptation).
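Since LoRA is listed as the fine-tuning technique, a minimal sketch of attaching LoRA adapters to the textual LLM with the PEFT library is given below. The rank, alpha, dropout, and target modules are assumed values chosen for illustration, not hyperparameters reported by the authors.

# Illustrative LoRA setup for the textual LLM using PEFT; all hyperparameters
# below are assumptions, not the paper's reported values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
lora_config = LoraConfig(
    r=8,                                   # low-rank dimension (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
llm = get_peft_model(llm, lora_config)
llm.print_trainable_parameters()  # only the LoRA adapter weights are updated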
Author countries
UK, China, Singapore, Germany