Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models

Authors: Chung-Ting Tsai, Ching-Yun Ko, I-Hsin Chung, Yu-Chiang Frank Wang, Pin-Yu Chen

Published: 2024-11-28 13:04:45+00:00

AI Summary

This paper investigates training-free AI-generated image detection, improving upon existing methods by analyzing the impact of model backbones, perturbation types, and datasets. It introduces Contrastive Blur and MINDER, a minimum distance detector, to enhance performance and mitigate noise-type bias, achieving state-of-the-art results for training-free methods.

Abstract

The rapid advancement of generative models has introduced serious risks, including deepfake techniques for facial synthesis and editing. Traditional approaches rely on training classifiers and enhancing generalizability through various feature extraction techniques. Meanwhile, training-free detection methods address issues like limited data and overfitting by directly leveraging statistical properties from vision foundation models to distinguish between real and fake images. The current leading training-free approach, RIGID, utilizes DINOv2's sensitivity to perturbations in image space for detecting fake images, with fake image embeddings exhibiting greater sensitivity than those of real images. This observation prompts us to investigate how detection performance varies across model backbones, perturbation types, and datasets. Our experiments reveal that detection performance is closely linked to model robustness, with self-supervised learning (SSL) models providing more reliable representations. While Gaussian noise effectively detects general objects, it performs worse on facial images, whereas Gaussian blur is more effective due to potential frequency artifacts. To further improve detection, we introduce Contrastive Blur, which enhances performance on facial images, and MINDER (MINimum distance DetEctoR), which addresses noise-type bias, balancing performance across domains. Beyond performance gains, our work offers valuable insights for both the generative and detection communities, contributing to a deeper understanding of the model robustness properties leveraged in deepfake detection.


Key findings
Self-supervised learning models provide more robust representations for deepfake detection. Gaussian blur proves more effective than Gaussian noise for facial images due to frequency artifacts. MINDER, which uses the minimum distance across different perturbation types, achieves the highest overall performance among training-free methods and is comparable to training-based approaches.
Approach
The authors improve a training-free deepfake detection method (RIGID) by exploring different vision foundation models and perturbation types (noise and blur). They introduce Contrastive Blur, which compares the embeddings of blurred and sharpened versions of an image, and MINDER, which selects the minimum embedding distance across perturbation types to balance performance across datasets.
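The detection scores described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `embed` stands in for a frozen foundation-model encoder (e.g. DINOv2), the perturbation functions are supplied by the caller, and all function names here are hypothetical.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rigid_score(embed, image, perturb):
    """RIGID-style score: distance between the embeddings of an image and a
    perturbed copy. Fake images tend to yield larger distances, since their
    embeddings are more sensitive to perturbation."""
    return cosine_distance(embed(image), embed(perturb(image)))

def contrastive_blur_score(embed, image, blur, sharpen):
    """Contrastive Blur sketch: compare embeddings of a blurred copy and a
    sharpened copy, which the paper finds more effective on facial images."""
    return cosine_distance(embed(blur(image)), embed(sharpen(image)))

def minder_score(embed, image, perturbations):
    """MINDER sketch: take the minimum distance across perturbation types
    (e.g. Gaussian noise and Gaussian blur), so that no single noise-type
    bias dominates across domains."""
    return min(rigid_score(embed, image, p) for p in perturbations)
```

In use, each score would be thresholded (or ranked via AUROC) to decide real vs. fake; taking the minimum in MINDER keeps a real image's score low as long as it is robust to at least one perturbation type.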
Datasets
DF40 (facial images from various sources including Celeb-DF, FFHQ, CelebA and generated by methods like HeyGen, DDIM, DiT, Midjourney-6, PixArt-α, Stable Diffusion v2.1, SiT, StyleGAN2, StyleGAN3, StyleGAN-XL, VQGAN, Whichisreal, CollabDiff, e4e, StarGAN2, and StyleCLIP), GenImage (general images from ImageNet and generated by ADM, BigGAN, Glide, Midjourney, Stable Diffusion v1.4 and v1.5, VQDM, and Wukong)
Model(s)
DINOv2 (various sizes: base, large, giant), iBOT, DINO, ViT, CLIP, Nomic Embed Vision v1.5
Author countries
Taiwan, USA