KLASSify to Verify: Audio-Visual Deepfake Detection Using SSL-based Audio and Handcrafted Visual Features

Authors: Ivan Kukanov, Jun Wah Ng

Published: 2025-08-10 13:29:08+00:00

AI Summary

This paper proposes a multimodal deepfake detection approach that combines handcrafted visual features with self-supervised learning (SSL)-based audio representations. The interpretable visual cues and robust audio features strike a balance between performance and real-world deployability, yielding accurate deepfake detection and temporal localization.

Abstract

The rapid development of audio-driven talking head generators and advanced Text-To-Speech (TTS) models has led to more sophisticated temporal deepfakes. These advances highlight the need for robust methods capable of detecting and localizing deepfakes, even under novel, unseen attack scenarios. Current state-of-the-art deepfake detectors, while accurate, are often computationally expensive and struggle to generalize to novel manipulation techniques. To address these challenges, we propose multimodal approaches for the AV-Deepfake1M 2025 challenge. For the visual modality, we leverage handcrafted features to improve interpretability and adaptability. For the audio modality, we adapt a self-supervised learning (SSL) backbone coupled with graph attention networks to capture rich audio representations, improving detection robustness. Our approach strikes a balance between performance and real-world deployment, focusing on resilience and potential interpretability. On the AV-Deepfake1M++ dataset, our multimodal system achieves an AUC of 92.78% for the deepfake classification task and an IoU of 0.3536 for temporal localization using only the audio modality.


Key findings
The multimodal system achieved an AUC of 92.78% for deepfake classification and an IoU of 0.3536 for temporal localization using only the audio modality on the AV-Deepfake1M++ dataset. Handcrafted visual features improved performance, while the audio model showed high robustness to unseen attacks. The audio localization model, however, struggled with cross-dataset generalization.
Approach
For classification, the authors feed handcrafted visual features (blurriness, color shifts, landmark kinematics) to a Temporal Convolution Network (TCN) and use SSL-based audio features (Wav2Vec-AASIST) for the audio modality. For localization, they adapt a Boundary-aware Attention Mechanism (BAM) for audio and a TCN with a tagging head for video. A shallow fusion combines the calibrated per-modality scores; a sketch of the visual branch and fusion step follows below.
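The sketch below is a minimal illustration, not the authors' implementation: it computes two of the handcrafted per-frame cues named above (blurriness via Laplacian variance and simple color statistics; landmark kinematics would additionally require a face-landmark detector and are omitted), passes the feature sequence through a small dilated 1D-conv stack standing in for the TCN, and combines calibrated per-modality scores with a shallow weighted fusion. Feature dimensions, layer sizes, and the fusion weight are illustrative assumptions.

```python
# Sketch of the visual branch and shallow fusion, under the assumptions stated above.
import cv2
import numpy as np
import torch
import torch.nn as nn


def frame_features(frame_bgr: np.ndarray) -> np.ndarray:
    """Handcrafted per-frame cues: Laplacian-variance blurriness plus per-channel color mean/std."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.Laplacian(gray, cv2.CV_64F).var()
    mean, std = cv2.meanStdDev(frame_bgr)
    return np.concatenate([[blur], mean.ravel(), std.ravel()]).astype(np.float32)  # shape (7,)


class TemporalConvNet(nn.Module):
    """Small dilated 1D-conv stack over the per-frame feature sequence (stand-in for the paper's TCN)."""

    def __init__(self, in_dim: int = 7, hidden: int = 64, levels: int = 4):
        super().__init__()
        layers, ch = [], in_dim
        for i in range(levels):
            d = 2 ** i
            layers += [nn.Conv1d(ch, hidden, kernel_size=3, dilation=d, padding=d), nn.ReLU()]
            ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 1)  # clip-level real/fake logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> Conv1d expects (batch, channels, time)
        h = self.tcn(x.transpose(1, 2))        # (batch, hidden, time)
        return self.head(h.mean(dim=-1)).squeeze(-1)


def shallow_fusion(p_audio: float, p_video: float, w: float = 0.5) -> float:
    """Convex combination of calibrated per-modality fake probabilities (weight w is illustrative)."""
    return w * p_audio + (1.0 - w) * p_video


# Usage example on random data: one 25-frame clip of 224x224 BGR frames.
frames = np.random.randint(0, 256, (25, 224, 224, 3), dtype=np.uint8)
feats = torch.from_numpy(np.stack([frame_features(f) for f in frames])).unsqueeze(0)  # (1, 25, 7)
visual_logit = TemporalConvNet()(feats)
p_video = torch.sigmoid(visual_logit).item()
print(shallow_fusion(p_audio=0.8, p_video=p_video))
```

In this sketch the fusion is a simple weighted average of calibrated probabilities; the paper's shallow fusion of calibrated scores could use a different weighting or calibration scheme.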
Datasets
AV-Deepfake1M++
Model(s)
Temporal Convolution Network (TCN), Wav2Vec-AASIST, Boundary-aware Attention Mechanism (BAM), WavLM
Author countries
Singapore