Sharp Multiple Instance Learning for DeepFake Video Detection

Authors: Xiaodan Li, Yining Lang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Shuhui Wang, Hui Xue, Quan Lu

Published: 2020-08-11 08:52:17+00:00

AI Summary

This paper introduces a novel approach to DeepFake video detection that addresses partial face attacks, where only some of the faces in a video are manipulated. It proposes Sharp Multiple Instance Learning (S-MIL), a method that directly maps instance embeddings to bag predictions, alleviating the gradient-vanishing problem of traditional MIL and achieving superior performance.

Abstract

With the rapid development of facial manipulation techniques, face forgery has received considerable attention in the multimedia and computer vision communities due to security concerns. Existing methods are mostly designed either for single-frame detection trained with precise image-level labels or for video-level prediction that models only inter-frame inconsistency, leaving potentially high risks for DeepFake attackers. In this paper, we introduce a new problem of partial face attack in DeepFake video, where only video-level labels are provided and not all the faces in the fake videos are manipulated. We address this problem with a multiple instance learning (MIL) framework, treating faces as instances and the input video as a bag. A sharp MIL (S-MIL) is proposed which builds a direct mapping from instance embeddings to the bag prediction, rather than from instance embeddings to instance predictions and then to the bag prediction as in traditional MIL. Theoretical analysis proves that the gradient vanishing present in traditional MIL is relieved in S-MIL. To generate instances that accurately incorporate the partially manipulated faces, spatial-temporally encoded instances are designed to fully model intra-frame and inter-frame inconsistency, which further helps to promote detection performance. We also construct a new dataset, FFPMS, for partially attacked DeepFake video detection, which benefits the evaluation of different methods at both the frame and video levels. Experiments on FFPMS and the widely used DFDC dataset verify that S-MIL is superior to other counterparts for partially attacked DeepFake video detection. In addition, S-MIL can also be adapted to traditional DeepFake image detection tasks and achieves state-of-the-art performance on single-frame datasets.


Key findings
S-MIL outperforms existing frame-based and video-based methods on partially and fully attacked DeepFake video datasets. The incorporation of spatial-temporal encoding further enhances performance. S-MIL also achieves state-of-the-art results on single-frame DeepFake image detection tasks.
Approach
The authors propose Sharp Multiple Instance Learning (S-MIL) for DeepFake video detection, treating the faces in a video as instances and the video itself as a bag. Unlike traditional MIL, S-MIL maps instance embeddings directly to the bag prediction, mitigating gradient vanishing and improving accuracy. Spatial-temporally encoded instances are also designed to capture intra-frame and inter-frame inconsistencies between faces.
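The contrast between the traditional two-step MIL pipeline (per-instance sigmoid, then bag aggregation) and a direct instance-to-bag mapping can be illustrated with a toy numeric sketch. This is not the paper's exact formulation: mean aggregation stands in for its aggregation function, and the logit values are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-face logits for one partially attacked video:
# one manipulated face (large positive) among untouched faces
# whose logits are deep in the sigmoid's saturated region.
logits = np.array([4.0, -6.0, -5.0])

# Traditional MIL: sigmoid each instance, then aggregate to a bag score.
trad_bag = sigmoid(logits).mean()

# Direct mapping in the spirit of S-MIL: aggregate first, sigmoid once.
smil_bag = sigmoid(logits.mean())

# Gradient of the bag score w.r.t. a saturated instance logit (index 1).
# Two-step MIL: sigmoid'(z_i) / n -- tiny when z_i is saturated.
grad_trad = sigmoid(logits[1]) * (1 - sigmoid(logits[1])) / len(logits)
# Direct mapping: sigmoid'(mean(z)) / n -- independent of per-instance
# saturation, so the gradient does not vanish.
m = logits.mean()
grad_smil = sigmoid(m) * (1 - sigmoid(m)) / len(logits)
```

In the two-step pipeline, a saturated instance contributes an almost-zero gradient, while the direct mapping backpropagates through a single sigmoid applied after aggregation, which is the intuition behind the paper's claim that S-MIL relieves gradient vanishing.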
Datasets
FaceForensics++ (FF++), Celeb-DF, Deepfake Detection Challenge (DFDC), FaceForensics Plus with Mixing samples (FFPMS)
Model(s)
XceptionNet (as a backbone), 1D CNNs for spatial-temporal encoding, S-MIL (Sharp Multiple Instance Learning)
Author countries
China