FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models

Authors: Zhipei Xu, Xuanyu Zhang, Runyi Li, Zecheng Tang, Qing Huang, Jian Zhang

Published: 2024-10-03 17:59:34+00:00

AI Summary

FakeShield is a multi-modal, explainable image forgery detection and localization framework that addresses the black-box nature and limited generalization of existing methods. It leverages GPT-4o to enrich existing forgery datasets with textual tampering descriptions, and combines a Domain Tag-guided Explainable Forgery Detection Module with a Multi-modal Forgery Localization Module to improve both performance and explainability.

Abstract

The rapid development of generative AI is a double-edged sword, which not only facilitates content creation but also makes image manipulation easier and more difficult to detect. Although current image forgery detection and localization (IFDL) methods are generally effective, they tend to face two challenges: 1) black-box nature with unknown detection principle, 2) limited generalization across diverse tampering methods (e.g., Photoshop, DeepFake, AIGC-Editing). To address these issues, we propose the explainable IFDL task and design FakeShield, a multi-modal framework capable of evaluating image authenticity, generating tampered region masks, and providing a judgment basis based on pixel-level and image-level tampering clues. Additionally, we leverage GPT-4o to enhance existing IFDL datasets, creating the Multi-Modal Tamper Description dataSet (MMTD-Set) for training FakeShield's tampering analysis capabilities. Meanwhile, we incorporate a Domain Tag-guided Explainable Forgery Detection Module (DTE-FDM) and a Multi-modal Forgery Localization Module (MFLM) to address various types of tamper detection interpretation and achieve forgery localization guided by detailed textual descriptions. Extensive experiments demonstrate that FakeShield effectively detects and localizes various tampering techniques, offering an explainable and superior solution compared to previous IFDL methods. The code is available at https://github.com/zhipeixu/FakeShield.


Key findings
FakeShield outperforms existing methods in image forgery detection and localization across diverse tampering techniques (Photoshop, DeepFake, AIGC-Editing). It also demonstrates superior explainability, providing detailed justifications for its predictions. The model shows robustness to common image degradations.
Approach
FakeShield uses a multi-modal approach, combining image and text data. It employs GPT-4o to generate descriptions of tampered images and their masks, creating a new dataset (MMTD-Set). A Domain Tag-guided Explainable Forgery Detection Module and a Multi-modal Forgery Localization Module then handle detection and localization, respectively.
Datasets
MMTD-Set (created using CASIAv2, FFHQ, FaceApp, COCO, and self-constructed data), CASIA1+, Columbia, IMD2020, Coverage, DSO, Korus, DFFD, Seq-DeepFake
Model(s)
Multi-modal Large Language Model (M-LLM, fine-tuned with LoRA), Segment Anything Model (SAM, fine-tuned with LoRA), GPT-4o
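Both models above are adapted with LoRA (Low-Rank Adaptation), which freezes the pretrained weights and trains only a small low-rank update. A minimal numeric sketch of the idea is below; the dimensions, rank, and scaling are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal LoRA sketch: the frozen pretrained weight W stays fixed, and only
# the low-rank factors A (down-projection) and B (up-projection) are trained.
# The effective weight is W + (alpha / r) * B @ A, so far fewer parameters
# are updated than in full fine-tuning.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16   # illustrative sizes, rank, and scale

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init

def lora_forward(x):
    # Frozen path plus the scaled low-rank adapter path.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((1, d_in))
# With B initialized to zero, the adapter contributes nothing at step 0,
# so the model starts out exactly matching the frozen pretrained model:
assert np.allclose(lora_forward(x), x @ W.T)
```

The zero initialization of B is the standard LoRA trick: training begins from the pretrained model's behavior, and the adapter only gradually deviates as A and B are optimized.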
Author countries
China