Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation

Authors: Siwei Wen, Junyan Ye, Peilin Feng, Hengrui Kang, Zichen Wen, Yize Chen, Jiang Wu, Wenjun Wu, Conghui He, Weijia Li

Published: 2025-03-19 05:14:44+00:00

AI Summary

FakeVLM, a large multimodal model, is introduced for synthetic image and DeepFake detection, providing natural language explanations for image artifacts. It achieves performance comparable to expert models and uses the newly introduced FakeClue dataset containing over 100,000 images with fine-grained artifact annotations.

Abstract

With the rapid advancement of Artificial Intelligence Generated Content (AIGC) technologies, synthetic images have become increasingly prevalent in everyday life, posing new challenges for authenticity assessment and detection. Despite the effectiveness of existing methods in evaluating image authenticity and locating forgeries, these approaches often lack human interpretability and do not fully address the growing complexity of synthetic data. To tackle these challenges, we introduce FakeVLM, a specialized large multimodal model designed for both general synthetic image and DeepFake detection tasks. FakeVLM not only excels in distinguishing real from fake images but also provides clear, natural language explanations for image artifacts, enhancing interpretability. Additionally, we present FakeClue, a comprehensive dataset containing over 100,000 images across seven categories, annotated with fine-grained artifact clues in natural language. FakeVLM demonstrates performance comparable to expert models while eliminating the need for additional classifiers, making it a robust solution for synthetic data detection. Extensive evaluations across multiple datasets confirm the superiority of FakeVLM in both authenticity classification and artifact explanation tasks, setting a new benchmark for synthetic image detection. The dataset and code will be released at: https://github.com/opendatalab/FakeVLM.


Key findings
FakeVLM outperforms other large multimodal models on multiple datasets in both synthetic image detection and artifact explanation tasks. It achieves accuracy comparable to or exceeding that of human annotators and expert models, without requiring additional classifiers. The explanatory text paradigm significantly improves out-of-distribution generalization.
Approach
FakeVLM uses a CLIP ViT vision backbone to extract image features, which are projected into the language model's embedding space and concatenated with the text embeddings of a prompt. The underlying large language model (Vicuna-v1.5-7B) is fine-tuned on the FakeClue dataset to classify images as real or fake and to generate natural language explanations of the detected artifacts.
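To make the feature flow concrete, below is a minimal PyTorch sketch of a LLaVA-v1.5-style forward pass as described above. This is an illustrative sketch, not the authors' released code: the vision tower and LLM are stand-in modules, and the dimensions (1024 for CLIP ViT-L/14 features, 4096 for Vicuna-7B hidden states) and the two-layer MLP projector follow the standard LLaVA-v1.5 recipe.

```python
import torch
import torch.nn as nn

class FakeVLMSketch(nn.Module):
    """Sketch of a LLaVA-style pipeline: vision features -> projector -> LLM."""

    def __init__(self, vision_dim=1024, llm_dim=4096, vocab_size=32000):
        super().__init__()
        self.vision_tower = nn.Identity()   # placeholder for CLIP ViT-L/14
        # Two-layer MLP projector mapping vision features into the LLM space,
        # as in the LLaVA-v1.5 recipe.
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.embed_tokens = nn.Embedding(vocab_size, llm_dim)
        self.llm = nn.Identity()            # placeholder for Vicuna-v1.5-7B

    def forward(self, patch_features, prompt_ids):
        # patch_features: (batch, num_patches, vision_dim) from the vision backbone
        vision_tokens = self.projector(self.vision_tower(patch_features))
        text_tokens = self.embed_tokens(prompt_ids)  # (batch, prompt_len, llm_dim)
        # Prepend projected image tokens to the prompt embeddings; the LLM then
        # generates both the real/fake verdict and the artifact explanation.
        inputs = torch.cat([vision_tokens, text_tokens], dim=1)
        return self.llm(inputs)

model = FakeVLMSketch()
feats = torch.randn(1, 576, 1024)          # 576 patches: a 336x336 image, patch size 14
prompt = torch.randint(0, 32000, (1, 32))  # dummy prompt token ids
out = model(feats, prompt)
print(out.shape)                            # (1, 608, 4096)
```

During fine-tuning on FakeClue, the training target is ordinary next-token prediction over the answer text, so the same language-modeling head produces the classification verdict and the explanation without a separate classifier.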
Datasets
FakeClue (100,000+ images across seven categories), LOKI, FaceForensics++ (FF++), DD-VQA, DMimage
Model(s)
CLIP ViT-L/14, Vicuna-v1.5-7B (LLaVA-v1.5 architecture)
Author countries
China, Hong Kong