Exploring Self-Supervised Vision Transformers for Deepfake Detection: A Comparative Analysis

Authors: Huy H. Nguyen, Junichi Yamagishi, Isao Echizen

Published: 2024-05-01 07:16:49+00:00

AI Summary

This paper compares self-supervised pre-trained Vision Transformers (ViTs) against supervised pre-trained ViTs and ConvNets for deepfake detection in images and videos. It finds that self-supervised ViTs, particularly DINOs, achieve comparable or better performance with limited data and offer explainability through attention mechanisms, making them a resource-efficient option.

Abstract

This paper investigates the effectiveness of self-supervised pre-trained vision transformers (ViTs) compared to supervised pre-trained ViTs and conventional neural networks (ConvNets) for detecting facial deepfake images and videos. It examines their potential for improved generalization and explainability, especially with limited training data. Despite the success of transformer architectures in various tasks, the deepfake detection community is hesitant to use large ViTs as feature extractors due to their perceived need for extensive data and suboptimal generalization with small datasets. This contrasts with ConvNets, which are already established as robust feature extractors. Additionally, training ViTs from scratch requires significant resources, limiting their use to large companies. Recent advancements in self-supervised learning (SSL) for ViTs, like masked autoencoders and DINOs, show adaptability across diverse tasks and semantic segmentation capabilities. By leveraging SSL ViTs for deepfake detection with modest data and partial fine-tuning, we find comparable adaptability to deepfake detection and explainability via the attention mechanism. Moreover, partial fine-tuning of ViTs is a resource-efficient option.


Key findings
Self-supervised pre-trained ViTs, especially DINOs, outperform supervised ViTs and ConvNets in deepfake detection, even with limited training data. Partial fine-tuning of the final ViT blocks improves performance and enables explainability via attention mechanisms. The best performance was achieved using DINOv2 with partial fine-tuning.
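
The explainability claim rests on inspecting the ViT's self-attention maps. Below is a minimal sketch of how such maps can be extracted, assuming the torch.hub DINO release (facebookresearch/dino, 'dino_vits16') and its get_last_selfattention() helper; the input filename and preprocessing are illustrative, and this is not necessarily the authors' exact visualization pipeline.

```python
# Hedged sketch: inspecting CLS-token attention for explainability, assuming
# the torch.hub DINO interface (facebookresearch/dino, 'dino_vits16').
import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

patch = 16
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

# "face_crop.png" is a hypothetical input image (a cropped face).
img = preprocess(Image.open("face_crop.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    attn = model.get_last_selfattention(img)      # (1, heads, tokens, tokens)

heads = attn.shape[1]
side = 224 // patch                               # 14x14 patch grid
cls_attn = attn[0, :, 0, 1:].reshape(heads, side, side)  # CLS attention per head
heatmap = F.interpolate(cls_attn.mean(0)[None, None], size=(224, 224),
                        mode="bilinear", align_corners=False)[0, 0]
# 'heatmap' can be overlaid on the face crop to see which regions drive the decision.
```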
Approach
The authors explore two approaches: (1) using frozen self-supervised pre-trained ViT backbones as multi-level feature extractors, and (2) partially fine-tuning the final transformer blocks. Simple classifiers are used in both cases so that performance reflects the backbone's feature extraction capabilities (a sketch of both setups follows).
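
A minimal sketch of the two setups, assuming the torch.hub DINOv2 release (facebookresearch/dinov2, 'dinov2_vitl14') with its .blocks list and get_intermediate_layers() helper; the classifier heads, the number of unfrozen blocks, and the optimizer settings are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
embed_dim = 1024  # ViT-L/14 embedding size

# --- Setup 1: frozen backbone as a multi-level feature extractor -------------
for p in backbone.parameters():
    p.requires_grad = False

def extract_multilevel(images: torch.Tensor, n_levels: int = 4) -> torch.Tensor:
    """Concatenate CLS tokens from the last n_levels blocks of the frozen ViT."""
    with torch.no_grad():
        levels = backbone.get_intermediate_layers(
            images, n=n_levels, return_class_token=True)
    return torch.cat([cls for _, cls in levels], dim=-1)  # (B, n_levels * embed_dim)

frozen_head = nn.Linear(4 * embed_dim, 2)  # simple real-vs-fake classifier

# --- Setup 2: partial fine-tuning of the final transformer blocks ------------
n_unfrozen = 2  # illustrative; only the last few blocks are unfrozen
for block in backbone.blocks[-n_unfrozen:]:
    for p in block.parameters():
        p.requires_grad = True

tuned_head = nn.Sequential(nn.LayerNorm(embed_dim), nn.Linear(embed_dim, 2))
trainable = [p for p in list(backbone.parameters()) + list(tuned_head.parameters())
             if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

def forward_partial(images: torch.Tensor) -> torch.Tensor:
    """images: (B, 3, 224, 224) face crops, ImageNet-normalized (224 is divisible by the 14-pixel patch)."""
    return tuned_head(backbone(images))  # backbone() returns the (B, 1024) CLS embedding
```

Freezing everything except the last blocks keeps the trainable parameter count small, which is what makes partial fine-tuning the resource-efficient option highlighted in the abstract.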
Datasets
VidTIMIT, VoxCeleb2, FaceForensics++, Google DFD, Deepfake Detection Challenge Dataset (DFDC), Celeb-DF, DeepfakeTIMIT, YouTube-DF, images generated by StarGAN, StarGAN-v2, RelGAN, ProGAN, StyleGAN, StyleGAN2, and a dataset from Țânțaru et al. containing diffusion-generated images.
Model(s)
EfficientNetV2 Large, DeiT III L/16-LayerScale, EVA-02-CLIP-L/14, MAE ViT-L/16, DINO (various versions and sizes), DINOv2 (various versions and sizes).
Author countries
Japan