Aggregating Layers for Deepfake Detection
Authors: Amir Jevnisek, Shai Avidan
Published: 2022-10-11 14:29:47+00:00
AI Summary
This paper addresses the challenge of Deepfake detection in a practical scenario where models are trained on one Deepfake algorithm but tested on others. The main contribution is an algorithm that aggregates features extracted across all layers of a backbone network to improve robustness and detection performance. This approach achieves state-of-the-art results for both Deepfake and synthetic image detection.
Abstract
The increasing popularity of facial manipulation (Deepfakes) and synthetic face creation raises the need for robust forgery detection solutions. Crucially, most work in this domain assumes that the Deepfakes in the test set come from the same Deepfake algorithms that were used to train the network. This is not how things work in practice. Instead, we consider the case where the network is trained on one Deepfake algorithm and tested on Deepfakes generated by another algorithm. Typically, supervised techniques follow a pipeline of visual feature extraction from a deep backbone, followed by a binary classification head. In contrast, our algorithm aggregates features extracted across all layers of one backbone network to detect a fake. We evaluate our approach on two domains of interest, Deepfake detection and synthetic image detection, and find that we achieve state-of-the-art results.