FDFtNet: Facing Off Fake Images using Fake Detection Fine-tuning Network

Authors: Hyeonseong Jeon, Youngoh Bang, Simon S. Woo

Published: 2020-01-05 16:04:17+00:00

AI Summary

The paper introduces FDFtNet, a lightweight neural network architecture for detecting fake images. It uses a novel Fine-Tune Transformer module combined with a pre-trained CNN and achieves state-of-the-art accuracy by fine-tuning on limited datasets.

Abstract

Creating fake images and videos such as Deepfake has become much easier these days due to the advancement in Generative Adversarial Networks (GANs). Moreover, recent research such as few-shot learning can create highly realistic personalized fake images with only a few images. Therefore, the threat of Deepfake being used for a variety of malicious intents, such as propagating fake images and videos, has become prevalent, and detecting these machine-generated fake images is more challenging than ever. In this work, we propose a light-weight, robust, fine-tuning neural network-based classifier architecture called Fake Detection Fine-tuning Network (FDFtNet), which is capable of detecting many of the new fake face image generation models and can be easily combined with existing image classification networks and fine-tuned on a few datasets. In contrast to many existing methods, our approach aims to reuse popular pre-trained models with only a few images for fine-tuning to effectively detect fake images. The core of our approach is to introduce an image-based self-attention module called Fine-Tune Transformer that uses only the attention module and the down-sampling layer. This module is added to the pre-trained model and fine-tuned on a small amount of data to search for new sets of feature space to detect fake images. We experiment with our FDFtNet on the GANs-based dataset (Progressive Growing GAN) and Deepfake-based dataset (Deepfake and Face2Face) with a small input image resolution of 64x64, which complicates detection. Our FDFtNet achieves an overall accuracy of 90.29% in detecting fake images generated from the GANs-based dataset, outperforming the state-of-the-art.
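
The abstract describes the Fine-Tune Transformer as an image-based self-attention module that uses only an attention module and a down-sampling layer. The following is a minimal PyTorch sketch of that idea, not the authors' released implementation: the SAGAN-style attention formulation, the channel-reduction factor, and the average-pool down-sampling are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of an image self-attention
# block followed by down-sampling, mirroring the Fine-Tune Transformer idea of
# "only the attention module and the down-sampling layer".
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection


class AttentionDownBlock(nn.Module):
    """Self-attention followed by 2x spatial down-sampling."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = SelfAttention2d(channels)
        self.down = nn.AvgPool2d(kernel_size=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.attn(x))
```

Stacking a few such attention-plus-down-sampling blocks gives a small, trainable branch that can be attached to a frozen pre-trained model, which is the role the FTT plays in FDFtNet.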


Key findings
FDFtNet outperforms state-of-the-art methods, achieving over 90% accuracy on multiple datasets, even when fine-tuned with a limited number of images. The Fine-Tune Transformer module significantly improves detection accuracy. The model demonstrates good generalization capabilities across different types of fake image generation techniques.
Approach
FDFtNet attaches a Fine-Tune Transformer (FTT) module and a MobileNetV3 block to a pre-trained CNN backbone. The FTT uses image-based self-attention to extract features, and the combined model is fine-tuned on a small set of real and fake images. Data augmentation such as Cutout is applied to improve performance.
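
A hedged sketch of this fine-tuning recipe is shown below: Cutout applied to small batches of 64x64 face crops and a pre-trained CNN reused as a frozen feature extractor with a new binary head. The backbone choice (torchvision ResNet-18), patch size, and learning rate are illustrative assumptions; the paper's backbones are SqueezeNet, ShallowNetV3, ResNetV2, and Xception, and the FTT branch and MobileNetV3 block it adds on top are omitted here for brevity.

```python
# Illustrative fine-tuning setup (assumptions, not the released implementation):
# Cutout augmentation + a frozen pre-trained CNN with a new real-vs-fake head.
import random

import torch
import torch.nn as nn
from torchvision import models


def cutout(img: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Cutout: zero one random patch x patch square in a CHW image tensor."""
    _, h, w = img.shape
    cy, cx = random.randrange(h), random.randrange(w)
    y1, y2 = max(0, cy - patch // 2), min(h, cy + patch // 2)
    x1, x2 = max(0, cx - patch // 2), min(w, cx + patch // 2)
    img = img.clone()
    img[:, y1:y2, x1:x2] = 0.0
    return img


# Reuse a pre-trained CNN; freeze its weights and train only the new head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # real vs. fake

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()


def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a small batch of 64x64 face crops."""
    images = torch.stack([cutout(im) for im in images])
    logits = backbone(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In FDFtNet the trainable part is larger than a single linear head, since the FTT branch and the appended MobileNetV3 block are also updated during fine-tuning, but the overall pattern of reusing a frozen pre-trained model and training only a small add-on with few images is the same.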
Datasets
CelebA, Progressive Growing GAN (PGGAN), Deepfakes, and Face2Face datasets. The authors used frames from FaceForensics for Face2Face.
Model(s)
SqueezeNet, ShallowNetV3, ResNetV2, and Xception (as baselines); FDFtNet, the proposed model, which combines these backbones with the FTT module and a MobileNetV3 block.
Author countries
South Korea