Impact of Benign Modifications on Discriminative Performance of Deepfake Detectors

Authors: Yuhang Lu, Evgeniy Upenik, Touradj Ebrahimi

Published: 2021-11-14 22:50:39+00:00

AI Summary

This paper proposes a framework to assess the robustness of deepfake detectors against benign image and video processing operations. It quantitatively measures the impact of common operations like compression, denoising, and resizing on a state-of-the-art deepfake detector, providing insights for designing more robust models.

Abstract

Deepfakes are becoming increasingly popular both in good-faith applications, such as entertainment, and in maliciously intended manipulations, such as image and video forgery. Primarily motivated by the latter, a large number of deepfake detectors have been proposed recently in order to identify such content. While the performance of such detectors still needs further improvement, they are often assessed in simple, if not trivial, scenarios. In particular, the impact of benign processing operations such as transcoding, denoising, resizing, and enhancement is not sufficiently studied. This paper proposes a more rigorous and systematic framework to assess the performance of deepfake detectors in more realistic situations. It quantitatively measures how, and to what extent, each benign processing approach impacts a state-of-the-art deepfake detection method. By applying the framework to a popular deepfake detector, our benchmark shows how the robustness of detectors can be assessed and provides valuable insights for designing more efficient deepfake detectors.


Key findings

Benign processing operations significantly impact deepfake detection performance. Compression, blurring, and noise addition all reduce accuracy, while resizing can sometimes improve it. These findings highlight the need for deepfake detectors that remain robust under common benign image and video processing operations.

Approach

The authors evaluate the robustness of the Capsule-Forensics deepfake detector by applying various benign image and video processing operations (e.g., compression, blurring, noise addition) to the FaceForensics++ dataset and measuring the resulting changes in detection accuracy, AUC, and F1-score.
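
The sketch below illustrates this kind of evaluation loop. It is a minimal reconstruction, not the authors' code: the benign operations (JPEG compression, Gaussian blur, additive noise, resizing) mirror those named above, but the specific parameter values are illustrative assumptions, and `detector` is a hypothetical stand-in for a trained model such as Capsule-Forensics that returns a fake-probability per image.

```python
# Hedged sketch of a robustness benchmark for a deepfake detector.
# `detector(img) -> float` (P(fake)) is a hypothetical interface;
# operation parameters are illustrative, not the paper's exact settings.
import io
import numpy as np
from PIL import Image, ImageFilter
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Re-encode the image as JPEG at the given quality factor."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def gaussian_blur(img: Image.Image, radius: float) -> Image.Image:
    return img.filter(ImageFilter.GaussianBlur(radius))

def add_noise(img: Image.Image, sigma: float) -> Image.Image:
    """Add zero-mean Gaussian noise in pixel space, clipped to [0, 255]."""
    arr = np.asarray(img, dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

def rescale(img: Image.Image, scale: float) -> Image.Image:
    w, h = img.size
    return img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)

# One entry per benign processing operation under test.
OPERATIONS = {
    "jpeg_q30": lambda im: jpeg_compress(im, 30),
    "blur_r2": lambda im: gaussian_blur(im, 2.0),
    "noise_s10": lambda im: add_noise(im, 10.0),
    "resize_0.5": lambda im: rescale(im, 0.5),
}

def evaluate(detector, images, labels):
    """Score the detector on clean and processed versions of the data."""
    results = {}
    for name, op in [("clean", lambda im: im)] + list(OPERATIONS.items()):
        probs = np.array([detector(op(im)) for im in images])  # P(fake)
        preds = (probs >= 0.5).astype(int)
        results[name] = {
            "accuracy": accuracy_score(labels, preds),
            "auc": roc_auc_score(labels, probs),
            "f1": f1_score(labels, preds),
        }
    return results
```

Comparing each operation's row against the "clean" baseline then quantifies how much that benign manipulation degrades (or, as with resizing, sometimes improves) detection performance.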

Datasets

FaceForensics++

Model(s)

Capsule-Forensics

Author countries

Switzerland