FReTAL: Generalizing Deepfake Detection using Knowledge Distillation and Representation Learning

Authors: Minha Kim, Shahroz Tariq, Simon S. Woo

Published: 2021-05-28 06:54:10+00:00

AI Summary

This paper introduces FReTAL, a transfer learning-based deepfake detection method using knowledge distillation and representation learning to adapt to new deepfake datasets while minimizing catastrophic forgetting. FReTAL enables a student model to quickly adapt to new deepfakes by distilling knowledge from a pre-trained teacher model without using source domain data during adaptation, achieving up to 86.97% accuracy on low-quality deepfakes.

Abstract

As GAN-based video and image manipulation technologies become more sophisticated and easily accessible, there is an urgent need for effective deepfake detection technologies. Moreover, various deepfake generation techniques have emerged over the past few years. While many deepfake detection methods have been proposed, their performance suffers on new types of deepfakes on which they are not sufficiently trained. To detect new types of deepfakes, the model should learn from additional data without losing its prior knowledge about deepfakes (catastrophic forgetting), especially when the new deepfakes are significantly different. In this work, we employ the Representation Learning (ReL) and Knowledge Distillation (KD) paradigms to introduce a transfer learning-based Feature Representation Transfer Adaptation Learning (FReTAL) method. We use FReTAL to perform domain adaptation tasks on new deepfake datasets while minimizing catastrophic forgetting. Our student model can quickly adapt to new types of deepfakes by distilling knowledge from a pre-trained teacher model and applying transfer learning without using source domain data during domain adaptation. Through experiments on FaceForensics++ datasets, we demonstrate that FReTAL outperforms all baselines on the domain adaptation task, with up to 86.97% accuracy on low-quality deepfakes.


Key findings
FReTAL outperforms baseline methods on domain adaptation tasks, particularly for low-quality deepfakes. The combination of knowledge distillation and feature-based representation learning effectively prevents catastrophic forgetting and improves adaptation across different deepfake domains. In the transfer learning setting, FReTAL achieves up to 86.97% accuracy on low-quality deepfakes.
Approach
FReTAL uses a teacher-student network. A teacher model is pre-trained on a source deepfake dataset. The student model, initialized with the teacher's weights, is then trained on a target dataset using knowledge distillation and a novel feature-based representation learning loss function to minimize catastrophic forgetting without needing source data.
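As a rough illustration of this setup, the sketch below combines a cross-entropy term on target-domain labels with a distillation term against the frozen teacher and a feature-matching term that discourages the student's representations from drifting away from the teacher's. The (features, logits) model interface, loss weights, and softmax temperature are illustrative assumptions, not the paper's exact formulation; FReTAL's feature-based representation loss is more elaborate than this plain per-batch MSE.

```python
import torch
import torch.nn.functional as F

def adaptation_step(teacher, student, x_target, y_target,
                    temperature=2.0, alpha=0.5, beta=0.1):
    """One student update on a target-domain batch (no source data needed).

    teacher: frozen model pre-trained on the source deepfake dataset
    student: initialized from the teacher's weights, adapted to the target
    Note: the (features, logits) return interface and the hyperparameters
    temperature, alpha, beta are illustrative assumptions, not the paper's
    reported values.
    """
    teacher.eval()
    with torch.no_grad():
        t_feat, t_logits = teacher(x_target)  # assumed (features, logits) output
    s_feat, s_logits = student(x_target)

    # Standard cross-entropy on the target-domain labels (real vs. fake).
    ce = F.cross_entropy(s_logits, y_target)

    # Knowledge distillation: match softened teacher/student distributions.
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Feature-based representation term: keep student features close to the
    # teacher's, which helps mitigate catastrophic forgetting.
    rep = F.mse_loss(s_feat, t_feat)

    return ce + alpha * kd + beta * rep
```

In this sketch, the distillation and feature-matching terms both pull the student toward the teacher's source-domain behavior, while the cross-entropy term drives adaptation to the target deepfake type, so no source-domain data is needed during adaptation.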
Datasets
FaceForensics++ (DeepFake, Face2Face, FaceSwap, and NeuralTextures manipulations); the pristine videos from FaceForensics++ serve as the real class.
Model(s)
Xception (the backbone model), CNN+LSTM, DenseNet with bidirectional RNN, and ShallowNet. These served as baselines and points of comparison for FReTAL.
Author countries
South Korea