Hybrid Transformer Network for Deepfake Detection
Authors: Sohail Ahmed Khan, Duc-Tien Dang-Nguyen
Published: 2022-08-11 13:30:42+00:00
AI Summary
This paper proposes a novel hybrid transformer network for deepfake video detection that uses early feature fusion of features from the XceptionNet and EfficientNet-B4 CNNs. Despite a relatively straightforward architecture and less training data, the model achieves results comparable to state-of-the-art methods on the FaceForensics++ and DFDC benchmarks.
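The early-fusion idea in the summary can be sketched as follows. This is a toy illustration, not the authors' implementation: the two backbones are replaced by small random linear feature extractors, the transformer is reduced to a single self-attention layer, and all dimensions are made up for the example; the real model uses pretrained XceptionNet and EfficientNet-B4 backbones trained end-to-end with a full transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ToyBackbone:
    """Stand-in for a CNN feature extractor (XceptionNet / EfficientNet-B4)."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.02

    def __call__(self, x):
        # x: (T, in_dim) flattened face crops -> (T, out_dim) ReLU features
        return np.maximum(x @ self.W, 0.0)

def self_attention(tokens, d_k):
    """Single-head scaled dot-product attention over the frame tokens."""
    Wq = rng.standard_normal((tokens.shape[-1], d_k)) * 0.02
    Wk = rng.standard_normal((tokens.shape[-1], d_k)) * 0.02
    Wv = rng.standard_normal((tokens.shape[-1], d_k)) * 0.02
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    return attn @ V  # (T, d_k)

def hybrid_forward(frames):
    backbone_a = ToyBackbone(frames.shape[-1], 48)  # "XceptionNet" stand-in
    backbone_b = ToyBackbone(frames.shape[-1], 32)  # "EfficientNet-B4" stand-in
    # Early fusion: concatenate the two feature vectors per frame
    fused = np.concatenate([backbone_a(frames), backbone_b(frames)], axis=-1)
    ctx = self_attention(fused, d_k=16)  # transformer-style token mixing
    pooled = ctx.mean(axis=0)            # pool over frames
    w = rng.standard_normal(16) * 0.02
    return 1.0 / (1.0 + np.exp(-(pooled @ w)))  # real/fake probability

frames = rng.standard_normal((8, 128))  # 8 face crops, 128-d each (toy)
p = hybrid_forward(frames)
```

The key point is that fusion happens *before* the transformer: each frame token already carries both backbones' features, so attention can mix evidence from both extractors across the frame sequence.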
Abstract
Deepfake media is becoming widespread because of easily available tools and mobile apps that can generate realistic-looking deepfake videos and images without requiring any technical knowledge. With further advances in this technology, the quantity and quality of deepfake media are expected to grow, making deepfakes a practical new tool for spreading mis- and disinformation. Because of these concerns, deepfake detection tools are becoming a necessity. In this study, we propose a novel hybrid transformer network that uses an early feature fusion strategy for deepfake video detection. Our model employs two CNN networks, (1) XceptionNet and (2) EfficientNet-B4, as feature extractors, and we train both feature extractors along with the transformer end-to-end on the FaceForensics++ and DFDC benchmarks. Despite its relatively straightforward architecture, our model achieves results comparable to more advanced state-of-the-art approaches when evaluated on FaceForensics++ and DFDC. In addition, we propose novel face cut-out augmentations as well as random cut-out augmentations, and show that these augmentations improve the detection performance of our model and reduce overfitting. Finally, we show that our model is capable of learning from a considerably small amount of data.
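The random cut-out augmentation mentioned in the abstract can be sketched as zeroing out a random rectangular patch of the face crop. This is a minimal sketch under assumed parameters (`max_frac` and the zero fill value are illustrative choices, not from the paper); the face cut-out variant additionally masks landmark-defined regions such as eyes, nose, or mouth, which requires a facial landmark detector and is not shown here.

```python
import numpy as np

def random_cutout(image, max_frac=0.3, rng=None):
    """Return a copy of `image` with a random rectangular patch zeroed out.

    `max_frac` bounds the patch height/width as a fraction of the image
    size; this parameter is an assumption for illustration.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ch = int(rng.integers(1, max(2, int(h * max_frac))))  # patch height
    cw = int(rng.integers(1, max(2, int(w * max_frac))))  # patch width
    y = int(rng.integers(0, h - ch + 1))                  # top-left corner
    x = int(rng.integers(0, w - cw + 1))
    out = image.copy()
    out[y:y + ch, x:x + cw] = 0  # occlude the patch
    return out

img = np.ones((64, 64, 3))
aug = random_cutout(img, rng=np.random.default_rng(1))
```

Applied per frame during training, this kind of occlusion forces the detector to rely on artifacts spread across the whole face rather than any single region, which is consistent with the paper's finding that the augmentations reduce overfitting.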