HOLA: Enhancing Audio-visual Deepfake Detection via Hierarchical Contextual Aggregations and Efficient Pre-training

Authors: Xuecheng Wu, Danlei Huang, Heli Sun, Xinyi Yin, Yifan Wang, Hao Wang, Jia Zhang, Fei Wang, Peihao Guo, Suyu Xing, Junxiao Xue, Liang He

Published: 2025-07-30 15:47:12+00:00

AI Summary

HOLA is a two-stage framework for video-level deepfake detection that pairs large-scale audio-visual self-supervised pre-training on a self-built 1.81M-sample dataset with hierarchical contextual aggregations (iterative cross-modal learning, local-global fusion, and a pyramid refiner) and a pseudo-supervised signal injection strategy, achieving first place in the 2025 1M-Deepfakes Detection Challenge.

Abstract

Advances in Generative AI have made video-level deepfake detection increasingly challenging, exposing the limitations of current detection techniques. In this paper, we present HOLA, our solution to the Video-Level Deepfake Detection track of the 2025 1M-Deepfakes Detection Challenge. Inspired by the success of large-scale pre-training in the general domain, we first scale audio-visual self-supervised pre-training for multimodal video-level deepfake detection, leveraging our self-built dataset of 1.81M samples and arriving at a unified two-stage framework. Specifically, HOLA features an iterative-aware cross-modal learning module for selective audio-visual interactions, hierarchical contextual modeling with gated aggregations under a local-global perspective, and a pyramid-like refiner for scale-aware cross-grained semantic enhancement. Moreover, we propose a pseudo-supervised signal injection strategy to further boost model performance. Extensive experiments across expert models and MLLMs demonstrate the effectiveness of HOLA. We also conduct a series of ablation studies to explore the crucial design factors of our introduced components. Remarkably, HOLA ranks 1st, outperforming the second-place entry by 0.0476 AUC on the TestA set.
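
The abstract names an iterative-aware cross-modal learning module for selective audio-visual interactions but does not detail it. Below is a minimal PyTorch sketch of one plausible reading, in which audio and visual token streams exchange context through bidirectional cross-attention while a learned sigmoid gate controls, per token, how much of the other modality is injected; the class names, dimensions, and iteration count are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class IterativeCrossModalBlock(nn.Module):
    """One round of bidirectional audio-visual cross-attention with learned
    gates for selective context injection (hypothetical reconstruction)."""
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.v_from_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.a_from_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate_a = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)

    def forward(self, v, a):
        # v: (B, Tv, D) visual tokens; a: (B, Ta, D) audio tokens.
        v_ctx, _ = self.v_from_a(v, a, a)   # visual queries attend to audio
        a_ctx, _ = self.a_from_v(a, v, v)   # audio queries attend to video
        gv = self.gate_v(torch.cat([v, v_ctx], dim=-1))  # per-token gate in (0, 1)
        ga = self.gate_a(torch.cat([a, a_ctx], dim=-1))
        v = self.norm_v(v + gv * v_ctx)     # selectively inject audio context
        a = self.norm_a(a + ga * a_ctx)
        return v, a

class IterativeCrossModalLearner(nn.Module):
    """Stacks the block so each modality can iteratively refine what it
    takes from the other; n_iters is an assumed hyperparameter."""
    def __init__(self, dim: int = 512, heads: int = 8, n_iters: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            IterativeCrossModalBlock(dim, heads) for _ in range(n_iters))

    def forward(self, v, a):
        for blk in self.blocks:
            v, a = blk(v, a)
        return v, a
```

Under this reading, the "selective" interaction comes from the gates: where audio offers no useful evidence for a visual token (or vice versa), the gate can shut the cross-modal pathway down to near zero.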


Key findings

HOLA achieved first place in the 2025 1M-Deepfakes Detection Challenge, outperforming the second-best entry by 0.0476 AUC on the TestA set. Ablation studies confirmed the contribution of each component, and qualitative analysis showed superior performance compared to advanced vision-language models.

Approach

HOLA uses a two-stage approach: first, large-scale audio-visual self-supervised pre-training on a self-built 1.81M-sample dataset to learn general audio-visual representations; second, fine-tuning those representations for deepfake detection through hierarchical contextual aggregation modules and a pseudo-supervised signal injection strategy.
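
Neither the abstract nor this summary specifies the mechanics of the pseudo-supervised signal injection strategy; the sketch below shows one common interpretation, confidence-thresholded pseudo-labeling during fine-tuning, offered purely as an assumption. The function name finetune_step, the 0.95 confidence threshold, and the 0.5 loss weight are all hypothetical.

```python
import torch
import torch.nn.functional as F

def finetune_step(model, labeled_batch, unlabeled_batch, optimizer,
                  pseudo_threshold=0.95, pseudo_weight=0.5):
    """One fine-tuning step that mixes a supervised loss with a
    down-weighted loss on confident pseudo labels (hypothetical sketch,
    not the paper's confirmed strategy)."""
    model.train()
    x_l, y_l = labeled_batch                     # labeled audio-visual clips
    loss = F.cross_entropy(model(x_l), y_l)      # standard supervised term

    with torch.no_grad():                        # pseudo labels from the model itself
        probs = model(unlabeled_batch).softmax(dim=-1)
        conf, pseudo_y = probs.max(dim=-1)
        keep = conf >= pseudo_threshold          # keep only confident predictions

    if keep.any():                               # inject the pseudo-supervised signal
        logits_u = model(unlabeled_batch[keep])
        loss = loss + pseudo_weight * F.cross_entropy(logits_u, pseudo_y[keep])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```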

Datasets

AV-Deepfake1M++, CelebV-HQ, CNC-AV series, HDTF, MSD-Wild-DB

Model(s)

Transformer-based audio and video encoders combined with an iterative-aware cross-modal learning module, a local-global contextual fusion module, and a pyramid-like refiner.
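
To make the listed modules concrete, here is a hedged PyTorch sketch of how a gated local-global fusion block and a pyramid-like refiner could look; the branch choices (depthwise temporal convolution for local context, self-attention for global context, an average-pool pyramid for scale-aware refinement) and all hyperparameters are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGlobalGatedFusion(nn.Module):
    """Hierarchical contextual modeling sketch: a local branch (depthwise
    temporal conv) and a global branch (self-attention), merged by a
    learned gate (hypothetical reconstruction)."""
    def __init__(self, dim: int = 512, heads: int = 8, kernel: int = 5):
        super().__init__()
        self.local = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.glob = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (B, T, D) fused tokens
        loc = self.local(x.transpose(1, 2)).transpose(1, 2)
        glo, _ = self.glob(x, x, x)
        g = self.gate(torch.cat([loc, glo], dim=-1))
        return self.norm(x + g * loc + (1 - g) * glo)

class PyramidRefiner(nn.Module):
    """Pyramid-like refiner sketch: pool the sequence at several temporal
    strides, project each level, upsample back, and sum the levels for a
    scale-aware enhancement of the features."""
    def __init__(self, dim: int = 512, strides=(1, 2, 4)):
        super().__init__()
        self.strides = strides
        self.proj = nn.ModuleList(nn.Linear(dim, dim) for _ in strides)

    def forward(self, x):                        # x: (B, T, D)
        out = torch.zeros_like(x)
        for s, proj in zip(self.strides, self.proj):
            h = x if s == 1 else F.avg_pool1d(
                x.transpose(1, 2), kernel_size=s, stride=s).transpose(1, 2)
            h = proj(h)
            if s != 1:                           # restore the original length T
                h = F.interpolate(h.transpose(1, 2), size=x.size(1)).transpose(1, 2)
            out = out + h
        return out
```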

Author countries

China