Detecting Deepfakes with Multivariate Soft Blending and CLIP-based Image-Text Alignment

Authors: Jingwei Li, Jiaxin Tong, Pengfei Wu

Published: 2026-02-14 09:53:35+00:00

AI Summary

This paper introduces MSBA-CLIP, a novel framework for deepfake detection designed to improve accuracy and generalization against diverse forgery techniques. It leverages CLIP's multimodal alignment, a Multivariate and Soft Blending Augmentation (MSBA) strategy to synthesize complex forged images, and a Multivariate Forgery Intensity Estimation (MFIE) module for explicit guidance. The approach aims to learn generalizable patterns from subtle forgery traces, outperforming existing methods in both in-domain and cross-domain evaluations.

Abstract

The proliferation of highly realistic facial forgeries necessitates robust detection methods. However, existing approaches often suffer from limited accuracy and poor generalization due to significant distribution shifts among samples generated by diverse forgery techniques. To address these challenges, we propose a novel Multivariate and Soft Blending Augmentation with CLIP-guided Forgery Intensity Estimation (MSBA-CLIP) framework. Our method leverages the multimodal alignment capabilities of CLIP to capture subtle forgery traces. We introduce a Multivariate and Soft Blending Augmentation (MSBA) strategy that synthesizes images by blending forgeries from multiple methods with random weights, forcing the model to learn generalizable patterns. Furthermore, a dedicated Multivariate Forgery Intensity Estimation (MFIE) module is designed to explicitly guide the model in learning features related to varied forgery modes and intensities. Extensive experiments demonstrate state-of-the-art performance. On in-domain tests, our method improves Accuracy and AUC by 3.32% and 4.02%, respectively, over the best baseline. In cross-domain evaluations across five datasets, it achieves an average AUC gain of 3.27%. Ablation studies confirm the efficacy of both proposed components. While the reliance on a large vision-language model entails higher computational cost, our work presents a significant step towards more generalizable and robust deepfake detection.


Key findings

The MSBA-CLIP method achieved 100% Accuracy and AUC on in-domain FF++ tests (C23 and C40). In cross-domain evaluations across five datasets, it demonstrated an average AUC gain of 3.27% over the best baseline, including a 9.73% improvement on the DFD dataset. Ablation studies confirmed that both the MSBA and MFIE modules are crucial for enhancing generalization and robustness.

Approach

The MSBA-CLIP framework builds upon a CLIP-ViT model, integrating text prompts to guide visual feature extraction. It employs a Multivariate and Soft Blending Augmentation (MSBA) strategy that creates synthetic training samples by blending multiple forgery methods with random weights. A Multivariate Forgery Intensity Estimation (MFIE) module is also introduced to predict per-pixel forgery intensity and blending weights, with all components optimized through a multi-task learning objective.
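The blending step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the use of Dirichlet-sampled convex weights, the global intensity range, and the `msba_blend` helper name are all assumptions made for the example.

```python
import numpy as np

def msba_blend(real_img, fake_imgs, rng=None):
    """Sketch of a multivariate soft blending augmentation.

    Mixes forged images from several methods into a real face image
    with random convex weights, and derives a per-pixel intensity map
    of the kind an MFIE-style module could be trained to regress.
    All images are float arrays in [0, 1] with identical shapes.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = len(fake_imgs)
    # Random convex combination over the k forgery methods (assumed Dirichlet).
    weights = rng.dirichlet(np.ones(k))
    forged = sum(w * f for w, f in zip(weights, fake_imgs))
    # Soft blending: mix the combined forgery into the real image
    # with a random global intensity (range is an assumption).
    alpha = rng.uniform(0.3, 1.0)
    blended = (1.0 - alpha) * real_img + alpha * forged
    # Per-pixel forgery intensity: scaled mean absolute channel difference.
    intensity = alpha * np.abs(forged - real_img).mean(axis=-1)
    return blended, weights, intensity
```

In training, `blended` would serve as an augmented forgery sample, while `weights` and `intensity` could act as regression targets for the multi-task objective mentioned above.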
Datasets

FaceForensics++ (FF++), Celeb-DF (CDF) v2, DeepFake Detection Challenge (DFDC) Preview, DeepFake Detection Challenge (DFDC), DeepFake Detection (DFD), DeeperForensics-1.0 (DFo)

Model(s)

CLIP-ViT/B-16, Multimodal Interaction Projection (MIP) layer, Multivariate and Soft Blending Augmentation (MSBA) strategy, Multivariate Forgery Intensity Estimation (MFIE) module

Author countries

China