RoGA: Towards Generalizable Deepfake Detection through Robust Gradient Alignment
Authors: Lingyu Qiu, Ke Jiang, Xiaoyang Tan
Published: 2025-05-27 03:02:21+00:00
AI Summary
This paper introduces RoGA, a novel learning objective for deepfake detection that aligns generalization gradient updates with empirical risk minimization (ERM) gradient updates. By applying perturbations to model parameters and aligning ascending points across domains, RoGA enhances the robustness of deepfake detection models to domain shifts without introducing additional regularization, outperforming state-of-the-art methods.
Abstract
Recent advancements in domain generalization for deepfake detection have attracted significant attention, with previous methods often incorporating additional modules to prevent overfitting to domain-specific patterns. However, such regularization can hinder the optimization of the empirical risk minimization (ERM) objective, ultimately degrading model performance. In this paper, we propose a novel learning objective that aligns generalization gradient updates with ERM gradient updates. The key innovation is to apply perturbations to model parameters and align the ascending points across domains, which enhances the robustness of deepfake detection models to domain shifts. This approach effectively preserves domain-invariant features while managing domain-specific characteristics, without introducing additional regularization. Experimental results on multiple challenging deepfake detection datasets demonstrate that our gradient alignment strategy outperforms state-of-the-art domain generalization techniques, confirming the efficacy of our method. The code is available at https://github.com/Lynn0925/RoGA.
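To make the gradient-alignment idea more concrete, below is a minimal PyTorch-style sketch of a SAM-like update that perturbs the parameters along each domain's ascent direction, recomputes gradients at those perturbed ("ascending") points, and keeps only the components that agree with the ERM gradient. The function name, the perturbation radius `rho`, and the simple dot-product agreement test are illustrative assumptions, not the authors' implementation; the exact RoGA objective and update rule are given in the paper and the linked repository.

```python
# Hypothetical sketch of a per-domain perturbation step with a simple
# gradient-alignment heuristic. Names and the alignment rule are assumptions.
import torch
import torch.nn as nn


def roga_like_step(model: nn.Module, domain_batches, loss_fn, optimizer, rho=0.05):
    """One update: ascend along each domain's gradient, then keep only
    perturbed-point gradients that agree with the ERM gradient."""
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) ERM gradient over all domains (reference direction).
    optimizer.zero_grad()
    erm_loss = sum(loss_fn(model(x), y) for x, y in domain_batches) / len(domain_batches)
    erm_loss.backward()
    erm_grad = [p.grad.detach().clone() for p in params]

    # 2) For each domain: move to its ascending point, recompute the
    #    gradient there, then restore the original parameters.
    aligned_grads = [torch.zeros_like(g) for g in erm_grad]
    for x, y in domain_batches:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params])) + 1e-12
        eps = [rho * p.grad / grad_norm for p in params]

        with torch.no_grad():
            for p, e in zip(params, eps):
                p.add_(e)                      # ascend: w -> w + eps_d

        optimizer.zero_grad()
        loss_fn(model(x), y).backward()        # gradient at the perturbed point

        with torch.no_grad():
            for p, e, g_ref, g_acc in zip(params, eps, erm_grad, aligned_grads):
                p.sub_(e)                      # restore original parameters
                g = p.grad.detach()
                # crude alignment test: keep the domain gradient only if it
                # points in the same half-space as the ERM gradient
                if torch.sum(g * g_ref) > 0:
                    g_acc.add_(g / len(domain_batches))

    # 3) Apply the aligned gradient with the base optimizer.
    optimizer.zero_grad()
    for p, g in zip(params, aligned_grads):
        p.grad = g
    optimizer.step()
```

In practice this logic would likely be folded into a custom optimizer, and the per-parameter dot-product test could be replaced by whatever alignment or projection rule the paper actually derives; the sketch only illustrates the overall structure of perturb, re-evaluate, and align.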