AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors

Authors: You-Ming Chang, Chen Yeh, Wei-Chen Chiu, Ning Yu

Published: 2023-10-26 14:23:45+00:00

AI Summary

This paper introduces AntifakePrompt, a novel deepfake detection approach using prompt-tuned vision-language models (VLMs). It formulates deepfake detection as a visual question answering problem and tunes soft prompts in InstructBLIP to significantly improve detection accuracy on unseen data from diverse generative models.

Abstract

Deep generative models can create remarkably photorealistic fake images while raising concerns about misinformation and copyright infringement, known as deepfake threats. Deepfake detection techniques have been developed to distinguish between real and fake images, where existing methods typically learn classifiers in the image domain or various feature domains. However, the generalizability of deepfake detection against emerging and more advanced generative models remains challenging. In this paper, inspired by the zero-shot advantages of Vision-Language Models (VLMs), we propose a novel approach called AntifakePrompt, using VLMs (e.g., InstructBLIP) and prompt tuning techniques to improve deepfake detection accuracy over unseen data. We formulate deepfake detection as a visual question answering problem and tune soft prompts for InstructBLIP to answer whether a query image is real or fake. We conduct full-spectrum experiments on datasets from 3 held-in and 20 held-out generative models, covering modern text-to-image generation, image editing, and adversarial image attacks. These testing datasets provide useful benchmarks for further deepfake detection research. Moreover, the results demonstrate that (1) deepfake detection accuracy can be significantly and consistently improved (from 71.06% to 92.11% in average accuracy over unseen domains) using pretrained vision-language models with prompt tuning; (2) our superior performance comes at a lower cost in training data and trainable parameters, resulting in an effective and efficient solution for deepfake detection. Code and models can be found at https://github.com/nctu-eva-lab/AntifakePrompt.


Key findings
Prompt tuning significantly and consistently improved deepfake detection accuracy (from 71.06% to 92.11% on average over unseen domains). The method achieved superior performance with less training data and fewer trainable parameters than existing baselines, and it remained robust against various attack strategies.
Approach
AntifakePrompt formulates deepfake detection as a visual question answering (VQA) problem. It uses a pre-trained VLM (InstructBLIP) and prompt tuning to optimize a question embedding for distinguishing real and fake images. This approach leverages the zero-shot generalization capabilities of VLMs.
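To make the prompt-tuning recipe concrete, below is a minimal PyTorch sketch of the general idea: the backbone stays frozen and only a small set of soft prompt token embeddings is optimized so the model answers "Yes"/"No" to a real-vs-fake question. The FrozenVLMStub class, its mean-pooling fusion, and all dimensions are illustrative stand-ins, not the InstructBLIP architecture or the AntifakePrompt implementation.

```python
import torch
import torch.nn as nn

class FrozenVLMStub(nn.Module):
    """Toy stand-in for a frozen VLM such as InstructBLIP: maps image features
    plus question-token embeddings to logits over two answer words
    ("Yes" = real, "No" = fake)."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.answer_head = nn.Linear(embed_dim, 2)
        for p in self.parameters():          # frozen, as in prompt tuning
            p.requires_grad_(False)

    def forward(self, image_feats: torch.Tensor, prompt_embeds: torch.Tensor) -> torch.Tensor:
        # Fuse image features with the (learnable) question embedding by
        # mean-pooling; a real VLM would use cross-attention instead.
        fused = image_feats + prompt_embeds.mean(dim=1)
        return self.answer_head(fused)

embed_dim, n_prompt_tokens, batch_size = 256, 16, 8
vlm = FrozenVLMStub(embed_dim)

# The only trainable parameters: a handful of soft prompt token embeddings.
soft_prompt = nn.Parameter(torch.randn(1, n_prompt_tokens, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy batch: image features and labels (1 = real -> "Yes", 0 = fake -> "No").
image_feats = torch.randn(batch_size, embed_dim)
labels = torch.randint(0, 2, (batch_size,))

for step in range(100):
    logits = vlm(image_feats, soft_prompt.expand(batch_size, -1, -1))
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the soft prompt is updated, the trainable parameter count stays tiny compared with fine-tuning the whole VLM, which is the efficiency argument the paper makes.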
Datasets
Real images from Microsoft COCO and Flickr30k; fake images from 3 held-in and 20 held-out generative models (including Stable Diffusion, DALLE-2, and others), covering text-to-image generation, image editing, and adversarial attacks.
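As a rough illustration of how such a benchmark could be consumed, the sketch below scores a detector per dataset and averages the accuracies over domains; the predict_is_real stub, directory layout, and folder names are hypothetical and do not reflect the released evaluation code.

```python
import random
from pathlib import Path

def predict_is_real(image_path: Path) -> bool:
    """Placeholder for the prompt-tuned VQA detector; here it guesses at random.
    Replace with a call that asks the tuned model whether the image is real."""
    return random.random() < 0.5

def dataset_accuracy(root: Path, label_is_real: bool) -> float:
    """Fraction of images in `root` whose predicted answer matches the
    dataset-level real/fake label."""
    images = sorted(root.glob("*.png")) + sorted(root.glob("*.jpg"))
    correct = sum(predict_is_real(p) == label_is_real for p in images)
    return correct / max(len(images), 1)

# Real-image sets should be answered "real"; generated sets "fake".
real_sets = ["COCO", "Flickr30k"]                       # illustrative folder names
fake_sets = ["StableDiffusion", "DALLE2", "ImageEdit"]  # illustrative folder names
scores = {name: dataset_accuracy(Path("data") / name, label_is_real=(name in real_sets))
          for name in real_sets + fake_sets}
print("average accuracy:", sum(scores.values()) / len(scores))
```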
Model(s)
InstructBLIP (a Vision-Language Model)
Author countries
Taiwan, United States