CapsFake: A Multimodal Capsule Network for Detecting Instruction-Guided Deepfakes

Authors: Tuan Nguyen, Naseem Khan, Issa Khalil

Published: 2025-04-27 12:31:47+00:00

AI Summary

CapsFake, a novel multimodal capsule network, is proposed for detecting instruction-guided deepfake image edits. It integrates visual, textual, and frequency-domain features using a competitive routing mechanism to identify manipulated regions with high precision and robustness against various attacks.

Abstract

The rapid evolution of deepfake technology, particularly in instruction-guided image editing, threatens the integrity of digital images by enabling subtle, context-aware manipulations. Generated conditionally from real images and textual prompts, these edits are often imperceptible to both humans and existing detection systems, revealing significant limitations in current defenses. We propose a novel multimodal capsule network, CapsFake, designed to detect such deepfake image edits by integrating low-level capsules from visual, textual, and frequency-domain modalities. High-level capsules, predicted through a competitive routing mechanism, dynamically aggregate local features to identify manipulated regions with precision. Evaluated on diverse datasets, including MagicBrush, Unsplash Edits, Open Images Edits, and Multi-turn Edits, CapsFake outperforms state-of-the-art methods by up to 20% in detection accuracy. Ablation studies validate its robustness, achieving detection rates above 94% under natural perturbations and 96% against adversarial attacks, with excellent generalization to unseen editing scenarios. This approach establishes a powerful framework for countering sophisticated image manipulations.


Key findings
CapsFake outperforms state-of-the-art methods by up to 20% in detection accuracy. It achieves detection rates above 94% under natural perturbations and 96% against adversarial attacks, and shows strong generalization to unseen editing scenarios.
Approach
CapsFake is a multimodal capsule network that integrates low-level capsules from visual, textual, and frequency-domain modalities. High-level capsules, predicted through a competitive routing mechanism, dynamically aggregate these local features to identify manipulated regions.
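
The routing step can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the modality encoders (OpenCLIP, BLIP, DCT) are replaced by random tensors, the paper's competitive routing is approximated with the standard routing-by-agreement update from capsule networks, and all layer sizes are illustrative.

```python
# Minimal sketch (not the paper's code): fuse per-modality low-level capsules
# and route them into high-level class capsules ("real" vs. "edited").
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Squashing non-linearity: keeps capsule direction, bounds its length."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class MultimodalCapsuleHead(nn.Module):
    def __init__(self, n_low=3 * 8, d_low=16, n_high=2, d_high=32, iters=3):
        super().__init__()
        # One transformation matrix per (low-level, high-level) capsule pair.
        self.W = nn.Parameter(0.01 * torch.randn(n_low, n_high, d_high, d_low))
        self.iters = iters

    def forward(self, u):
        # u: (batch, n_low, d_low) low-level capsules from all modalities.
        u_hat = torch.einsum('ijab,nib->nija', self.W, u)  # per-pair predictions
        b = torch.zeros(u.size(0), u.size(1), self.W.size(1), device=u.device)
        for _ in range(self.iters):
            c = F.softmax(b, dim=-1)                   # competition over high capsules
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)   # weighted aggregation
            v = squash(s)                              # high-level capsules
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)   # agreement update
        return v  # (batch, n_high, d_high); capsule lengths act as class scores


# Toy usage: 8 capsules per modality (visual, textual, frequency), dim 16 each.
if __name__ == "__main__":
    caps_visual = torch.randn(4, 8, 16)
    caps_text = torch.randn(4, 8, 16)
    caps_freq = torch.randn(4, 8, 16)
    low = torch.cat([caps_visual, caps_text, caps_freq], dim=1)
    high = MultimodalCapsuleHead()(low)
    scores = high.norm(dim=-1)  # per-class capsule lengths
    print(scores.shape)         # torch.Size([4, 2])
```

The softmax over coupling logits makes low-level capsules compete for high-level ones, which is one plausible way local evidence from the three modalities could be aggregated into a manipulation decision.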
Datasets
MagicBrush, Unsplash Edits, Open Images Edits, Multi-turn Edits
Model(s)
Multimodal Capsule Network (CapsFake), OpenCLIP-ConvNextLarge, BLIP, Discrete Cosine Transform (DCT)
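
Since the model list includes a Discrete Cosine Transform component, the frequency branch presumably consumes DCT coefficients of the input image. Below is a minimal sketch of how such features could be computed; the grayscale input, the `keep` parameter, and the log-magnitude scaling are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch (assumption, not the paper's pipeline): a 2-D DCT feature
# vector of the kind a frequency-domain branch could consume alongside
# OpenCLIP/BLIP embeddings.
import numpy as np
from scipy.fft import dctn


def dct_features(image_gray: np.ndarray, keep: int = 32) -> np.ndarray:
    """2-D type-II DCT of a grayscale image; keep the low-frequency block."""
    coeffs = dctn(image_gray.astype(np.float64), norm='ortho')
    low_freq = coeffs[:keep, :keep]            # top-left corner = low frequencies
    return np.log1p(np.abs(low_freq)).ravel()  # log-scaled magnitude vector


if __name__ == "__main__":
    img = np.random.rand(224, 224)  # stand-in for a real grayscale image
    feats = dct_features(img)
    print(feats.shape)  # (1024,)
```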
Author countries
Qatar