AuthGuard: Generalizable Deepfake Detection via Language Guidance

Authors: Guangyu Shen, Zhihua Li, Xiang Xu, Tianchen Zhao, Zheng Zhang, Dongsheng An, Zhuowen Tu, Yifan Xing, Qin Zhang

Published: 2025-06-04 22:50:07+00:00

AI Summary

AuthGuard improves deepfake detection generalization by integrating language guidance with statistical cues. It trains a vision encoder using image-text contrastive learning and incorporates data uncertainty learning to mitigate noise, achieving state-of-the-art accuracy on several datasets.

Abstract

Existing deepfake detection techniques struggle to keep up with ever-evolving, novel forgery methods. This limitation stems from their reliance on statistical artifacts learned during training, which are often tied to specific generation processes that may not be representative of samples from new, unseen deepfake generation methods encountered at test time. We propose that incorporating language guidance can improve deepfake detection generalization by integrating human-like commonsense reasoning -- such as recognizing logical inconsistencies and perceptual anomalies -- alongside statistical cues. To achieve this, we train an expert deepfake vision encoder by combining discriminative classification with image-text contrastive learning, where the text is generated by generalist MLLMs using few-shot prompting. This allows the encoder to extract both language-describable, commonsense deepfake artifacts and statistical forgery artifacts from pixel-level distributions. To further enhance robustness, we integrate data uncertainty learning into vision-language contrastive learning, mitigating noise in image-text supervision. Our expert vision encoder seamlessly interfaces with an LLM, further enabling more generalized and interpretable deepfake detection while also boosting accuracy. The resulting framework, AuthGuard, achieves state-of-the-art deepfake detection accuracy in both in-distribution and out-of-distribution settings, achieving AUC gains of 6.15% on the DFDC dataset and 16.68% on the DF40 dataset. Additionally, AuthGuard significantly enhances deepfake reasoning, improving performance by 24.69% on the DDVQA dataset.


Key findings
AuthGuard achieved state-of-the-art results, with AUC gains of 6.15% on DFDC and 16.68% on DF40. It also significantly improved deepfake reasoning performance by 24.69% on the DDVQA dataset, demonstrating enhanced generalization and interpretability.
Approach
AuthGuard combines discriminative classification with image-text contrastive learning, using MLLMs to generate text descriptions of deepfake artifacts. An adaptive aggregation mechanism balances statistical and commonsense artifacts, and the resulting vision encoder is integrated with an LLM for reasoning.
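The training objective described above can be sketched as a weighted sum of a CLIP-style symmetric InfoNCE contrastive term and a binary real/fake classification term, with the contrastive term down-weighted by a learned uncertainty. This is a minimal numpy illustration, not the paper's implementation: the function names, the scalar log-variance, and the Kendall-and-Gal-style uncertainty weighting are simplifying assumptions (the paper's data uncertainty learning may model per-sample distributions over embeddings instead).

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric image-text contrastive loss; matched pairs share a row index."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    labels = np.arange(len(img))

    def xent(l):
        # numerically stable softmax cross-entropy on the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    return 0.5 * (xent(logits) + xent(logits.T))

def bce(probs, targets, eps=1e-9):
    """Binary cross-entropy for the discriminative real/fake head."""
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

def combined_loss(img_emb, txt_emb, fake_prob, fake_label,
                  log_var=0.0, w_cls=1.0, w_con=1.0):
    """Hypothetical AuthGuard-style objective: classification + contrastive
    learning, with the contrastive term attenuated by a learned log-variance
    to soften noisy MLLM-generated captions."""
    con = info_nce(img_emb, txt_emb)
    con_weighted = np.exp(-log_var) * con + log_var  # uncertainty weighting
    return w_cls * bce(fake_prob, fake_label) + w_con * con_weighted
```

A larger `log_var` shrinks the gradient from a noisy image-text pair while the additive `log_var` penalty keeps the model from inflating uncertainty everywhere.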
Datasets
FF++, DFDC, DF40, DD-VQA
Model(s)
ViT-L/14 (vision), RoBERTa (text), Vicuna-7B (LLM)
Author countries
USA, China