Deepfake Detection Scheme Based on Vision Transformer and Distillation

Authors: Young-Jin Heo, Young-Ju Choi, Young-Woon Lee, Byung-Gyu Kim

Published: 2021-04-03 09:13:05+00:00

AI Summary

This paper proposes a deepfake detection scheme using a Vision Transformer model with a distillation methodology. The model combines CNN features and patch-based positioning, improving upon existing CNN-based approaches by reducing overfitting and false negatives, achieving higher AUC and F1 scores on the DFDC dataset.

Abstract

A deepfake is a manipulated video made with a generative deep learning technique, such as a Generative Adversarial Network (GAN) or an autoencoder, that anyone can utilize. Recently, with the increase in deepfake videos, classifiers built on convolutional neural networks (CNNs) that can distinguish fake videos, as well as deepfake datasets, have been actively created. However, previous studies based on the CNN structure suffer not only from overfitting but also from frequently misjudging fake videos as real ones. In this paper, we propose a Vision Transformer model with a distillation methodology for detecting fake videos. We design the model so that CNN features and a patch-based positioning scheme learn to interact across all positions to find the artifact region, addressing the false-negative problem. Through comparative analysis on the Deepfake Detection Challenge (DFDC) dataset, we verify that the proposed scheme with patch embeddings as input outperforms the state-of-the-art using the combined CNN features. Without ensemble techniques, our model obtains an AUC of 0.978 and an F1 score of 91.9, while the previous SOTA model yields an AUC of 0.972 and an F1 score of 90.6 under the same conditions.
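The distillation methodology mentioned in the abstract can be sketched in a minimal form. This assumes a DeiT-style hard-label distillation objective, where a class token learns from the ground-truth label and a distillation token learns from the teacher's (here, EfficientNet's) predicted label; the exact loss formulation is an assumption, not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, label):
    """Hypothetical hard distillation: the class token is supervised by the
    ground-truth label, the distillation token by the teacher's hard label."""
    teacher_label = int(np.argmax(teacher_logits))      # teacher's predicted class
    ce_cls = -np.log(softmax(cls_logits)[label])        # cross-entropy vs. ground truth
    ce_dist = -np.log(softmax(dist_logits)[teacher_label])  # cross-entropy vs. teacher
    return 0.5 * (ce_cls + ce_dist)                     # equal weighting (assumption)

# Binary real/fake logits for a single sample (illustrative values).
loss = hard_distillation_loss(
    cls_logits=np.array([2.0, -1.0]),    # student class-token logits
    dist_logits=np.array([1.5, -0.5]),   # student distillation-token logits
    teacher_logits=np.array([3.0, 0.0]), # teacher (EfficientNet) logits
    label=0,                             # ground truth: real
)
```

When student and teacher agree with the ground truth, as above, both cross-entropy terms are small and the combined loss stays low.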


Key findings
The proposed model outperforms the state-of-the-art EfficientNet-B7 model on the DFDC dataset, achieving an AUC of 0.978 and an F1 score of 91.9 without ensemble techniques. The results indicate improved robustness in detecting fake videos compared to the previous SOTA, particularly in reducing false negatives.
Approach
The authors propose a Vision Transformer architecture that incorporates CNN features and a distillation technique. Patch embeddings are combined with features extracted by EfficientNet, fed into the transformer, and refined via distillation from an EfficientNet teacher network to enhance performance and reduce overfitting.
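The token construction described above can be sketched with plain numpy. This is a minimal illustration, not the authors' implementation: the projection weights are random placeholders, and the layout (CNN feature tokens concatenated with patch tokens before the transformer encoder) and the token counts are assumptions chosen only to show the shapes involved.

```python
import numpy as np

def patch_embed(image, patch=16, dim=64, rng=None):
    """Split an (H, W, C) image into non-overlapping patches and project each
    flattened patch to `dim` with a random, untrained linear map (placeholder
    for the learned ViT patch embedding)."""
    rng = rng or np.random.default_rng(0)
    H, W, C = image.shape
    n_h, n_w = H // patch, W // patch
    patches = (image[:n_h * patch, :n_w * patch]
               .reshape(n_h, patch, n_w, patch, C)   # split rows and cols into patches
               .transpose(0, 2, 1, 3, 4)             # group the two grid axes together
               .reshape(n_h * n_w, patch * patch * C))
    W_proj = rng.standard_normal((patch * patch * C, dim)) * 0.02
    return patches @ W_proj                           # (num_patches, dim)

# Hypothetical token layout: EfficientNet feature-map tokens concatenated
# with ViT patch tokens before entering the transformer encoder.
img = np.zeros((224, 224, 3))
patch_tokens = patch_embed(img)       # (196, 64) for 224/16 = 14x14 patches
cnn_tokens = np.zeros((49, 64))       # stand-in for a 7x7 EfficientNet feature map
tokens = np.concatenate([cnn_tokens, patch_tokens], axis=0)
```

The key point is that the transformer's self-attention then operates over all 245 tokens jointly, letting patch positions attend to CNN features anywhere in the frame, which is how the paper frames the search for artifact regions.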
Datasets
Deepfake Detection Challenge (DFDC) Dataset
Model(s)
Vision Transformer (ViT), EfficientNet-B7
Author countries
South Korea