Detecting Deepfakes with Multivariate Soft Blending and CLIP-based Image-Text Alignment
Authors: Jingwei Li, Jiaxin Tong, Pengfei Wu
Published: 2026-02-14 09:53:35+00:00
AI Summary
This paper introduces MSBA-CLIP, a novel framework for deepfake detection designed to improve accuracy and generalization against diverse forgery techniques. It leverages CLIP's multimodal alignment, a Multivariate and Soft Blending Augmentation (MSBA) strategy to synthesize complex forged images, and a Multivariate Forgery Intensity Estimation (MFIE) module for explicit guidance. The approach aims to learn generalizable patterns from subtle forgery traces, outperforming existing methods in both in-domain and cross-domain evaluations.
Abstract
The proliferation of highly realistic facial forgeries necessitates robust detection methods. However, existing approaches often suffer from limited accuracy and poor generalization due to significant distribution shifts among samples generated by diverse forgery techniques. To address these challenges, we propose a novel Multivariate and Soft Blending Augmentation with CLIP-guided Forgery Intensity Estimation (MSBA-CLIP) framework. Our method leverages the multimodal alignment capabilities of CLIP to capture subtle forgery traces. We introduce a Multivariate and Soft Blending Augmentation (MSBA) strategy that synthesizes images by blending forgeries from multiple methods with random weights, forcing the model to learn generalizable patterns. Furthermore, a dedicated Multivariate Forgery Intensity Estimation (MFIE) module is designed to explicitly guide the model in learning features related to varied forgery modes and intensities. Extensive experiments demonstrate state-of-the-art performance. On in-domain tests, our method improves Accuracy and AUC by 3.32% and 4.02%, respectively, over the best baseline. In cross-domain evaluations across five datasets, it achieves an average AUC gain of 3.27%. Ablation studies confirm the efficacy of both proposed components. While the reliance on a large vision-language model entails higher computational cost, our work presents a significant step towards more generalizable and robust deepfake detection.
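The MSBA idea described above, blending forgeries from several methods with random weights and a soft intensity, can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the function name `msba_blend`, the Dirichlet weight sampling, and the intensity range are all assumptions for the sketch, with images treated as NumPy float arrays of identical shape.

```python
import numpy as np

def msba_blend(real_img, forged_imgs, rng=None):
    """Hypothetical sketch of Multivariate and Soft Blending Augmentation.

    Mixes forgeries produced by several methods into one training sample
    using random convex weights, so no single forgery pattern dominates,
    then softly interpolates toward the real image to vary forgery
    intensity. (Illustrative only; not the paper's implementation.)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random convex weights over the K forgery methods; a Dirichlet draw
    # keeps them non-negative and summing to 1.
    weights = rng.dirichlet(np.ones(len(forged_imgs)))
    blended_forgery = sum(w * f for w, f in zip(weights, forged_imgs))
    # "Soft" blending: interpolate toward the real image with a random
    # intensity, yielding samples of varying forgery strength. The
    # (0.2, 1.0) range is an assumed choice for this sketch.
    intensity = rng.uniform(0.2, 1.0)
    sample = intensity * blended_forgery + (1.0 - intensity) * real_img
    return sample.astype(real_img.dtype), weights, intensity
```

The blend weights and intensity returned here are exactly the kind of targets an intensity-estimation head such as the paper's MFIE module could be trained to regress, giving the detector explicit supervision about forgery mode and strength.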