CAMME: Adaptive Deepfake Image Detection with Multi-Modal Cross-Attention

Authors: Naseem Khan, Tuan Nguyen, Amine Bermak, Issa Khalil

Published: 2025-05-23 15:39:07+00:00

AI Summary

CAMME, a multi-modal deepfake image detection framework, dynamically integrates visual, textual, and frequency-domain features using cross-attention to achieve robust cross-domain generalization. Extensive experiments show CAMME significantly outperforms state-of-the-art methods and demonstrates high resilience to both natural and adversarial attacks.

Abstract

The proliferation of sophisticated AI-generated deepfakes poses critical challenges for digital media authentication and societal security. While existing detection methods perform well within specific generative domains, they exhibit significant performance degradation when applied to manipulations produced by unseen architectures, a fundamental limitation as generative technologies rapidly evolve. We propose CAMME (Cross-Attention Multi-Modal Embeddings), a framework that dynamically integrates visual, textual, and frequency-domain features through a multi-head cross-attention mechanism to establish robust cross-domain generalization. Extensive experiments demonstrate CAMME's superiority over state-of-the-art methods, yielding improvements of 12.56% on natural scenes and 13.25% on facial deepfakes. The framework demonstrates exceptional resilience, maintaining over 91% accuracy under natural image perturbations and achieving 89.01% and 96.14% accuracy against PGD and FGSM adversarial attacks, respectively. Our findings validate that integrating complementary modalities through cross-attention enables more effective decision boundary realignment for reliable deepfake detection across heterogeneous generative architectures.


Key findings
CAMME outperforms state-of-the-art methods by 12.56% on natural scenes and 13.25% on facial deepfakes. It maintains over 91% accuracy under natural image perturbations and achieves high accuracy against PGD and FGSM adversarial attacks (89.01% and 96.14%, respectively).
Approach
CAMME uses a multi-head cross-attention mechanism to integrate visual embeddings (OpenCLIP-ConvNextLarge), textual embeddings (CLIP's text encoder with BLIP caption generation), and frequency-domain features (DCT). This allows the model to adaptively focus on the most discriminative features for each image, improving cross-domain generalization.
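To make the integration step concrete, the following is a minimal sketch (PyTorch) of fusing visual, textual, and frequency embeddings with multi-head cross-attention for binary real-vs-fake classification. The embedding dimensions, projection layers, and the choice of the visual token as the attention query are illustrative assumptions, not CAMME's exact configuration.

# Minimal sketch of cross-attention fusion over three modality embeddings.
# Dimensions, projections, and query/key roles are assumptions for illustration.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=768, num_heads=8, num_classes=2):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.proj_visual = nn.Linear(768, dim)   # e.g. OpenCLIP image features
        self.proj_text = nn.Linear(512, dim)     # e.g. CLIP text features
        self.proj_freq = nn.Linear(1024, dim)    # e.g. flattened DCT features
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, visual, text, freq):
        # Treat each modality as a single token: (batch, 1, dim).
        v = self.proj_visual(visual).unsqueeze(1)
        t = self.proj_text(text).unsqueeze(1)
        f = self.proj_freq(freq).unsqueeze(1)
        tokens = torch.cat([v, t, f], dim=1)           # (batch, 3, dim)
        # The visual token attends over the full multi-modal sequence.
        fused, _ = self.cross_attn(query=v, key=tokens, value=tokens)
        fused = self.norm(fused.squeeze(1) + v.squeeze(1))
        return self.classifier(fused)                   # real-vs-fake logits

# Example usage with random tensors standing in for encoder outputs.
model = CrossModalFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 1024))

The single-token-per-modality layout is the simplest way to let attention weight modalities adaptively per image, which is the intuition behind the cross-domain gains described above.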
Datasets
A multi-modal benchmark dataset augmented with BLIP-generated captions for 161,837 natural scene images across five generative architectures (Stable Diffusion V1.5, ADM, GLIDE, VQDM, BigGAN) and a face dataset with 100,000 images from CelebA, CelebA-HQ, Youtube-Frame, PGGAN, Glow, Face2Face, and StarGAN.
Model(s)
OpenCLIP-ConvNextLarge for visual features, CLIP's text encoder and BLIP for text embeddings, DCT for frequency-domain features, and a multi-head cross-attention transformer architecture for integrating the modalities and performing classification.
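As an illustration of the frequency branch, below is a minimal sketch of extracting DCT-based features from a grayscale image with SciPy; the log-magnitude scaling and flattening are common practice and assumptions here, not necessarily the paper's exact preprocessing.

# Minimal sketch of 2-D DCT feature extraction for the frequency branch.
import numpy as np
from scipy.fft import dct

def dct_features(image_gray: np.ndarray) -> np.ndarray:
    """image_gray: 2-D array (H, W) in [0, 1]; returns a flattened feature vector."""
    # Separable 2-D type-II DCT: apply along rows, then columns.
    coeffs = dct(dct(image_gray, type=2, norm="ortho", axis=0),
                 type=2, norm="ortho", axis=1)
    # Log-scale magnitudes compress the dynamic range of the spectrum.
    return np.log1p(np.abs(coeffs)).flatten()

# Example: features for a random 32x32 patch.
feats = dct_features(np.random.rand(32, 32))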
Author countries
Qatar