MINTIME: Multi-Identity Size-Invariant Video Deepfake Detection

Authors: Davide Alessandro Coccomini, Giorgos Kordopatis-Zilos, Giuseppe Amato, Roberto Caldelli, Fabrizio Falchi, Symeon Papadopoulos, Claudio Gennaro

Published: 2022-11-20 15:17:24+00:00

AI Summary

MINTIME is a novel video deepfake detection approach that addresses the limitations of existing methods by handling multiple identities and varying face sizes within a single video. It achieves this through a Spatio-Temporal TimeSformer architecture with an Identity-aware Attention mechanism and two novel embeddings: a Temporal Coherent Positional Embedding and a Size Embedding.

Abstract

In this paper, we introduce MINTIME, a video deepfake detection approach that captures spatial and temporal anomalies and handles instances of multiple people in the same video and variations in face sizes. Previous approaches disregard such information either by using simple a-posteriori aggregation schemes, i.e., average or max operation, or using only one identity for the inference, i.e., the largest one. On the contrary, the proposed approach builds on a Spatio-Temporal TimeSformer combined with a Convolutional Neural Network backbone to capture spatio-temporal anomalies from the face sequences of multiple identities depicted in a video. This is achieved through an Identity-aware Attention mechanism that attends to each face sequence independently based on a masking operation and facilitates video-level aggregation. In addition, two novel embeddings are employed: (i) the Temporal Coherent Positional Embedding that encodes each face sequence's temporal information and (ii) the Size Embedding that encodes the size of the faces as a ratio to the video frame size. These extensions allow our system to adapt particularly well in the wild by learning how to aggregate information of multiple identities, which is usually disregarded by other methods in the literature. It achieves state-of-the-art results on the ForgeryNet dataset with an improvement of up to 14% AUC in videos containing multiple people and demonstrates ample generalization capabilities in cross-forgery and cross-dataset settings. The code is publicly available at https://github.com/davide-coccomini/MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection.
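The Size Embedding described in the abstract encodes each face's size as a ratio to the video frame size. As a rough illustration of this idea (the binning scheme and embedding dimension below are my assumptions, not the paper's actual hyperparameters), one could quantize the face-to-frame area ratio and look up a learned embedding:

```python
import torch
import torch.nn as nn

class SizeEmbedding(nn.Module):
    """Maps a face-to-frame size ratio to a learned embedding vector.

    num_bins and dim are illustrative assumptions, not the paper's
    actual hyperparameters.
    """
    def __init__(self, num_bins=8, dim=384):
        super().__init__()
        self.num_bins = num_bins
        self.table = nn.Embedding(num_bins, dim)

    def forward(self, face_hw, frame_hw):
        # face_hw, frame_hw: (batch, 2) tensors holding (height, width)
        ratio = (face_hw[:, 0] * face_hw[:, 1]) / (frame_hw[:, 0] * frame_hw[:, 1])
        # Quantize the area ratio in [0, 1] into discrete bins
        bins = (ratio.clamp(0, 1) * (self.num_bins - 1)).long()
        return self.table(bins)
```

An embedding like this would be added to the face-token features so the transformer can account for small faces (which carry fewer manipulation artifacts per pixel) differently from large ones.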


Key findings
MINTIME achieves state-of-the-art results on the ForgeryNet dataset, showing an improvement of up to 14% AUC in videos with multiple people. It demonstrates strong generalization capabilities in cross-forgery and cross-dataset settings, exceeding the state-of-the-art by up to 22% AUC in some cases.
Approach
MINTIME uses a Spatio-Temporal TimeSformer combined with a CNN backbone to extract features from face sequences. An Identity-aware Attention mechanism independently attends to each face sequence, while Temporal Coherent Positional Embedding and Size Embedding encode temporal information and face size, respectively. This allows for effective aggregation of information from multiple identities within a single forward pass.
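The Identity-aware Attention can be pictured as a masking operation over the token sequence: each face token carries an identity index, and attention scores between tokens of different identities are masked out before the softmax, so each identity's sequence is attended to independently within one forward pass. This is a simplified single-head sketch under my own assumptions about shapes, not the authors' implementation:

```python
import torch

def identity_attention_mask(identity_ids: torch.Tensor) -> torch.Tensor:
    """Boolean mask allowing attention only within the same identity.

    identity_ids: (seq_len,) tensor assigning each face token an identity index.
    Returns a (seq_len, seq_len) mask where True = attention allowed.
    """
    return identity_ids.unsqueeze(0) == identity_ids.unsqueeze(1)

def masked_attention(q, k, v, identity_ids):
    # q, k, v: (seq_len, dim) projections of the face-token features
    scores = q @ k.t() / (q.shape[-1] ** 0.5)
    mask = identity_attention_mask(identity_ids)
    # Blocked pairs get -inf so the softmax assigns them zero weight
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

For example, with `identity_ids = torch.tensor([0, 0, 1, 1])`, tokens of identity 0 never attend to those of identity 1, yet both identities are processed in the same pass, which enables learned video-level aggregation rather than a fixed average or max over per-identity scores.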
Datasets
ForgeryNet, DFDC (for pre-training)
Model(s)
Spatio-Temporal TimeSformer, EfficientNet-B0, XceptionNet
Author countries
Italy, Greece