UMCL: Unimodal-generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection

Authors: Ching-Yi Lai, Chih-Yu Jian, Pei-Cheng Chuang, Chia-Ming Lee, Chih-Chung Hsu, Chiou-Ting Hsu, Chia-Wen Lin

Published: 2025-11-24 10:56:22+00:00

Comment: 24-page manuscript accepted to IJCV

AI Summary

UMCL is a unimodal-generated multimodal contrastive learning framework for robust cross-compression-rate deepfake detection. It transforms a single visual modality into compression-robust rPPG signals, temporal landmark dynamics, and semantic embeddings, which are explicitly aligned through affinity-driven semantic alignment (ASA), while cross-quality similarity learning (CQSL) enforces robustness across compression rates. The framework achieves superior performance across various compression rates and manipulation types, setting a new benchmark for robust deepfake detection.

Abstract

In deepfake detection, the varying degrees of compression employed by social media platforms pose significant challenges for model generalization and reliability. Although existing methods have progressed from single-modal to multimodal approaches, they face critical limitations: single-modal methods struggle with feature degradation under data compression in social media streaming, while multimodal approaches require expensive data collection and labeling and suffer from inconsistent modal quality or accessibility in real-world scenarios. To address these challenges, we propose a novel Unimodal-generated Multimodal Contrastive Learning (UMCL) framework for robust cross-compression-rate (CCR) deepfake detection. In the training stage, our approach transforms a single visual modality into three complementary features: compression-robust rPPG signals, temporal landmark dynamics, and semantic embeddings from pre-trained vision-language models. These features are explicitly aligned through an affinity-driven semantic alignment (ASA) strategy, which models inter-modal relationships through affinity matrices and optimizes their consistency through contrastive learning. Subsequently, our cross-quality similarity learning (CQSL) strategy enhances feature robustness across compression rates. Extensive experiments demonstrate that our method achieves superior performance across various compression rates and manipulation types, establishing a new benchmark for robust deepfake detection. Notably, our approach maintains high detection accuracy even when individual features degrade, while providing interpretable insights into feature relationships through explicit alignment.
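
To make the affinity-driven alignment concrete, the sketch below builds a cosine-similarity affinity matrix per generated modality over a batch and penalizes disagreement between the three matrices. The paper optimizes affinity consistency via contrastive learning; here a simple MSE consistency term stands in for that objective, and the function names (affinity_matrix, asa_consistency_loss), tensor shapes, and loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def affinity_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity affinity matrix over a batch.

    feats: (B, D) embeddings for one modality (hypothetical shape).
    Returns a (B, B) matrix of pairwise similarities.
    """
    feats = F.normalize(feats, dim=-1)
    return feats @ feats.t()

def asa_consistency_loss(rppg: torch.Tensor,
                         landmarks: torch.Tensor,
                         semantics: torch.Tensor) -> torch.Tensor:
    """Illustrative ASA-style objective: penalize disagreement between the
    affinity structures of the three unimodal-generated modalities.
    (The paper uses a contrastive objective; MSE is a stand-in here.)"""
    affs = [affinity_matrix(x) for x in (rppg, landmarks, semantics)]
    loss = torch.zeros(())
    for i in range(len(affs)):
        for j in range(i + 1, len(affs)):
            loss = loss + F.mse_loss(affs[i], affs[j])
    return loss

# Toy usage: B=8 clips, D=128-dim features per modality (random stand-ins).
B, D = 8, 128
loss = asa_consistency_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```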


Key findings
The UMCL framework consistently achieves superior deepfake detection performance across compression rates, datasets, and manipulation types, outperforming existing methods. It demonstrates exceptional robustness against severe modality degradation (rPPG sampling, landmark perturbations, adversarial text prompts), maintaining high accuracy where baseline methods degrade sharply. The explicit alignment via ASA and the robustness learning through CQSL are both crucial for building compact, semantically coherent, and discriminative feature representations.
Approach
The UMCL framework generates three complementary features (rPPG signals, facial landmark dynamics, and semantic embeddings) from a single visual input. An Affinity-driven Semantic Alignment (ASA) strategy models inter-modal relationships through affinity matrices to ensure semantic consistency. A Cross-Quality Similarity Learning (CQSL) strategy further enhances feature robustness across different compression rates by aligning high-quality and low-quality rPPG features through contrastive learning.
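
As a rough sketch of the cross-quality idea (not the paper's code), the snippet below contrasts high-quality rPPG features with features from heavily compressed versions of the same clips using a symmetric InfoNCE loss, treating matched pairs as positives; the function name cqsl_loss, the feature shapes, and the temperature value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cqsl_loss(hq_feats: torch.Tensor, lq_feats: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    """Illustrative cross-quality similarity learning loss.

    hq_feats: (B, D) rPPG features from lightly compressed (high-quality) clips.
    lq_feats: (B, D) features from heavily compressed versions of the same clips.
    Matched (hq, lq) pairs are positives; other pairs in the batch are negatives.
    """
    hq = F.normalize(hq_feats, dim=-1)
    lq = F.normalize(lq_feats, dim=-1)
    logits = hq @ lq.t() / temperature                         # (B, B) logits
    targets = torch.arange(hq.size(0), device=logits.device)   # diagonal = positives
    # Symmetric InfoNCE: align in both hq->lq and lq->hq directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random stand-in features for B=8 clips.
B, D = 8, 128
print(cqsl_loss(torch.randn(B, D), torch.randn(B, D)).item())
```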
Datasets
FaceForensics++, Celeb-DF, DFD, DFDC, VIPL
Model(s)
PhysFormer (for P-encoder), LRNet (for L-encoder), CLIP text encoder (ViT-B/16-based, for T-encoder), MTCNN
Author countries
Taiwan