Think Twice before Adaptation: Improving Adaptability of DeepFake Detection via Online Test-Time Adaptation

Authors: Hong-Hanh Nguyen-Le, Van-Tuan Tran, Dinh-Thuc Nguyen, Nhien-An Le-Khac

Published: 2025-05-24 16:58:53+00:00

AI Summary

This paper introduces Think Twice before Adaptation (T$^2$A), a novel online test-time adaptation method for deepfake detection that improves detector adaptability to post-processing manipulations and distribution shifts without needing training data or labels. T$^2$A achieves this by employing an uncertainty-aware negative learning objective alongside entropy minimization, prioritizing uncertain samples, and selectively updating model parameters.
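In rough PyTorch terms, the online adaptation loop described above could look like the following. This is a minimal sketch, not the authors' released implementation: restricting updates to normalization-layer parameters, the choice of optimizer, and the placeholder entropy objective are all assumptions here (the fuller objective and the parameter-selection step are sketched further below).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def online_adapt(detector: nn.Module, test_batches, lr: float = 1e-4):
    """Adapt a pretrained detector on an unlabeled test stream, one batch at a time."""
    # Assumption: only normalization-layer affine parameters are updated,
    # a common restriction in online TTA to keep adaptation stable.
    params = [p for m in detector.modules()
              if isinstance(m, (nn.BatchNorm2d, nn.LayerNorm))
              for p in m.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr)

    predictions = []
    for batch in test_batches:                 # batch: (B, C, H, W) face crops
        logits = detector(batch)               # (B, 2) real/fake logits
        probs = F.softmax(logits, dim=1)
        # Placeholder objective: plain entropy minimization; the full T^2A
        # objective also adds an uncertainty-aware negative-learning term.
        loss = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        predictions.append(logits.argmax(dim=1).detach())
    return predictions
```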

Abstract

Deepfake (DF) detectors face significant challenges when deployed in real-world environments, particularly when encountering test samples that deviate from the training data through either post-processing manipulations or distribution shifts. We demonstrate that post-processing techniques can completely obscure the generation artifacts present in DF samples, leading to performance degradation of DF detectors. To address these challenges, we propose Think Twice before Adaptation (T$^2$A), a novel online test-time adaptation method that enhances the adaptability of detectors during inference without requiring access to source training data or labels. Our key idea is to enable the model to explore alternative options through an Uncertainty-aware Negative Learning objective rather than solely relying on its initial predictions, as is common in entropy minimization (EM)-based approaches. We also introduce an Uncertain Sample Prioritization strategy and a Gradients Masking technique to improve adaptation by focusing on important samples and model parameters. Our theoretical analysis demonstrates that the proposed negative learning objective exhibits complementary behavior to EM, facilitating better adaptation capability. Empirically, our method achieves state-of-the-art results compared to existing test-time adaptation (TTA) approaches and significantly enhances the resilience and generalization of DF detectors during inference. Code is available at https://github.com/HongHanh2104/T2A-Think-Twice-Before-Adaptation.
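Read literally, the abstract pairs entropy minimization with a negative-learning term on a complementary (non-predicted) class. The sketch below is one illustrative reading of that idea for the binary real/fake case; the exact form of T$^2$A's Uncertainty-aware Negative Learning objective and its weighting scheme are assumptions here, not the paper's equations.

```python
import math
import torch
import torch.nn.functional as F

def em_plus_negative_learning(logits: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Illustrative combination of entropy minimization and negative learning."""
    probs = F.softmax(logits, dim=1)                          # (B, 2)

    # Entropy minimization: sharpen the model's own predictions.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # (B,)

    # Negative learning: treat the class the model did NOT predict as a
    # complementary label and push its probability down, a weaker (and more
    # noise-tolerant) signal than committing fully to the initial prediction.
    complementary = 1 - logits.argmax(dim=1)                  # flipped binary label
    p_comp = probs.gather(1, complementary.unsqueeze(1)).squeeze(1)
    negative_term = -torch.log(1.0 - p_comp + 1e-8)

    # Assumed uncertainty weighting: high-entropy (uncertain) samples lean
    # more on the negative-learning term, confident ones on plain EM.
    weight = entropy / math.log(2.0)                          # normalize to [0, 1]
    return ((1.0 - weight) * entropy + alpha * weight * negative_term).mean()
```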


Key findings
T$^2$A achieves state-of-the-art results compared to existing test-time adaptation methods. It significantly improves the resilience and generalization of deepfake detectors when facing unknown post-processing techniques and distribution shifts. The integration of T$^2$A substantially enhances the performance of various deepfake detection models.
Approach
T$^2$A uses an online test-time adaptation approach. It incorporates an uncertainty-aware negative learning objective that explores alternative class options and complements entropy minimization, prioritizes uncertain samples via Focal Loss, and employs a gradients masking technique to focus updates on important model parameters.
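The last two ingredients, prioritizing uncertain samples and masking gradients, might look roughly like the sketch below; the focal exponent and the top-k selection rule are illustrative assumptions rather than the paper's exact criteria. In a loop like the one sketched after the AI Summary, the per-sample weights would multiply the loss terms before averaging, and `mask_gradients` would be called between `loss.backward()` and the optimizer step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def focal_uncertainty_weights(logits: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Focal-style per-sample weights: uncertain (low-confidence) samples
    receive weights near 1, confident ones are down-weighted."""
    confidence, _ = F.softmax(logits, dim=1).max(dim=1)   # (B,) predicted-class prob
    return (1.0 - confidence) ** gamma

def mask_gradients(model: nn.Module, keep_ratio: float = 0.1) -> None:
    """Zero out all but the largest-magnitude `keep_ratio` fraction of each
    parameter's gradient entries, so only the most influential weights move.
    Call this between loss.backward() and optimizer.step()."""
    for param in model.parameters():
        if param.grad is None:
            continue
        flat = param.grad.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        threshold = flat.topk(k).values.min()
        param.grad.mul_((param.grad.abs() >= threshold).float())
```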
Datasets
FaceForensics++, CelebDF-v1, CelebDF-v2, DeepFakeDetection, DeepFake Detection Challenge Preview, UADFV, FaceShifter
Model(s)
Xception (as the source model), EfficientNetB4, F3Net, CORE, RECCE
Author countries
Ireland, Ireland, Vietnam