Exploring Self-Supervised Vision Transformers for Deepfake Detection: A Comparative Analysis
Authors: Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Published: 2024-05-01 07:16:49+00:00
AI Summary
This paper compares self-supervised pre-trained Vision Transformers (ViTs) against supervised pre-trained ViTs and ConvNets for detecting deepfake images and videos. It finds that self-supervised ViTs, particularly DINO models, achieve comparable or better performance with limited training data and offer explainability through their attention mechanisms, with partial fine-tuning providing a resource-efficient adaptation strategy.
Abstract
This paper investigates the effectiveness of self-supervised pre-trained vision transformers (ViTs) compared to supervised pre-trained ViTs and convolutional neural networks (ConvNets) for detecting facial deepfake images and videos. It examines their potential for improved generalization and explainability, especially with limited training data. Despite the success of transformer architectures across many tasks, the deepfake detection community has been hesitant to use large ViTs as feature extractors because of their perceived need for extensive training data and suboptimal generalization on small datasets. This contrasts with ConvNets, which are already established as robust feature extractors. Additionally, training ViTs from scratch requires significant resources, limiting their use to large companies. Recent advances in self-supervised learning (SSL) for ViTs, such as masked autoencoders and DINOs, have demonstrated adaptability across diverse tasks and semantic segmentation capabilities. By applying SSL ViTs to deepfake detection with modest training data and partial fine-tuning, we find that they adapt comparably well to the task and provide explainability via their attention mechanisms. Moreover, partial fine-tuning of ViTs is a resource-efficient option.
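To make the idea of partial fine-tuning concrete, the sketch below freezes a self-supervised DINO ViT backbone, unfreezes only its last few transformer blocks, and trains a small binary head for real-versus-fake classification in PyTorch. The torch.hub entry point is the publicly released DINO ViT-B/16 checkpoint; the specific choices (two unfrozen blocks, a linear head, AdamW) are illustrative assumptions rather than the authors' exact setup.

```python
# A minimal sketch of partial fine-tuning, assuming the publicly released
# DINO ViT-B/16 checkpoint from torch.hub. The number of unfrozen blocks,
# the linear head, and the optimizer settings are illustrative assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn


class DeepfakeDetector(nn.Module):
    """Mostly-frozen SSL ViT backbone + small trainable head for real/fake classification."""

    def __init__(self, backbone: nn.Module, n_unfrozen_blocks: int = 2):
        super().__init__()
        self.backbone = backbone
        # Freeze the whole pre-trained backbone ...
        for p in self.backbone.parameters():
            p.requires_grad = False
        # ... then unfreeze only the last few transformer blocks (partial fine-tuning).
        for blk in self.backbone.blocks[-n_unfrozen_blocks:]:
            for p in blk.parameters():
                p.requires_grad = True
        # Lightweight binary head on top of the [CLS] embedding (real vs. fake).
        self.head = nn.Linear(self.backbone.embed_dim, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)  # the DINO ViT returns the [CLS] token embedding
        return self.head(feats)


if __name__ == "__main__":
    # Self-supervised DINO ViT-B/16 backbone (released without a classification head).
    backbone = torch.hub.load("facebookresearch/dino:main", "dino_vitb16")
    model = DeepfakeDetector(backbone, n_unfrozen_blocks=2)

    # Only the unfrozen blocks and the head receive gradient updates.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=1e-4)

    dummy_faces = torch.randn(4, 3, 224, 224)  # stand-in for a batch of face crops
    logits = model(dummy_faces)                # shape: (4, 2)
    print(logits.shape)
```

Because only a small fraction of the parameters are updated, a setup along these lines reflects the resource efficiency the abstract attributes to partially fine-tuned ViTs, while the backbone's attention maps remain available for the explainability analysis the paper discusses.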