Adversarial Threats to DeepFake Detection: A Practical Perspective

Authors: Paarth Neekhara, Brian Dolhansky, Joanna Bitton, Cristian Canton Ferrer

Published: 2020-11-19 16:53:38+00:00

AI Summary

This paper investigates the vulnerability of state-of-the-art DeepFake detection methods to adversarial attacks. The authors perform black-box adversarial attacks, study how well adversarial perturbations transfer across different models, and propose techniques to improve that transferability. They also craft Universal Adversarial Perturbations, a more accessible attack in which a single precomputed pattern can be shared among attackers.

Abstract

Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase the credibility of social media platforms and other media-sharing websites. State-of-the-art DeepFake detection techniques rely on neural network based classification models which are known to be vulnerable to adversarial examples. In this work, we study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint. We perform adversarial attacks on DeepFake detectors in a black-box setting where the adversary does not have complete knowledge of the classification models. We study the extent to which adversarial perturbations transfer across different models and propose techniques to improve the transferability of adversarial examples. We also create more accessible attacks using Universal Adversarial Perturbations, which pose a very feasible attack scenario since they can be easily shared amongst attackers. We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that they can be easily bypassed in a practical attack scenario by designing transferable and accessible adversarial attacks.
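
To make the threat model concrete, the following is a minimal PyTorch sketch of one standard transfer attack, momentum iterative FGSM (MI-FGSM): the perturbation is crafted with white-box access to a surrogate detector, then submitted to the unseen black-box target. The surrogate architecture, epsilon budget, and input shapes below are illustrative assumptions, not the paper's exact attack or models.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet50

    # Stand-in surrogate: a generic backbone with a binary real/fake head.
    # The paper attacks the actual DFDC winning models; this is a placeholder.
    surrogate = resnet50(weights=None)
    surrogate.fc = torch.nn.Linear(surrogate.fc.in_features, 2)
    surrogate.eval()

    def mi_fgsm(x, y, model, eps=8/255, steps=10, mu=1.0):
        """Momentum iterative FGSM: craft an L-inf perturbation on `model`
        in the hope that it transfers to the unseen black-box detector."""
        alpha = eps / steps
        g = torch.zeros_like(x)                  # accumulated gradient momentum
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            # Normalize the gradient and accumulate momentum (MI-FGSM update).
            g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
            x_adv = x_adv.detach() + alpha * g.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
        return x_adv.detach()

    # Usage: maximize the loss on the true "fake" label (1) so the perturbed
    # frame is misclassified, then submit x_adv to the black-box target.
    x = torch.rand(1, 3, 224, 224)               # placeholder face crop in [0, 1]
    y_fake = torch.tensor([1])
    x_adv = mi_fgsm(x, y_fake, surrogate)
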


Key findings
State-of-the-art DeepFake detectors are vulnerable to adversarial attacks even in black-box scenarios where the adversary lacks knowledge of the classification model. Adversarial examples crafted on one detector transfer to others, and Universal Adversarial Perturbations offer easily deployable attacks that require minimal technical expertise once the perturbation is shared. These attacks achieve high success rates against the winning DFDC detectors.
Approach
The researchers attack DeepFake detectors in a black-box setting. They measure how well adversarial examples transfer across different models, propose methods to improve that transferability, and develop more accessible attacks based on Universal Adversarial Perturbations (sketched below).
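
The following is a minimal sketch of one common way to craft a Universal Adversarial Perturbation: a single epsilon-bounded pattern optimized over many fake frames so that adding it to any frame pushes a detector toward "real". The detector, data, and hyperparameters are placeholders and the paper's optimization may differ; the point is the last two lines, since applying a shared UAP is a single addition that needs no model access or expertise.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet50

    # Placeholder white-box model used to optimize the pattern; the finished
    # UAP is then shared and applied with no model access at all.
    detector = resnet50(weights=None)
    detector.fc = torch.nn.Linear(detector.fc.in_features, 2)
    detector.eval()

    def craft_uap(model, fake_frames, eps=8/255, epochs=5, lr=0.01):
        """Optimize one eps-bounded pattern over many fake frames so that
        adding it to any frame tends to flip the detector to "real"."""
        delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(epochs):
            for x in fake_frames:                    # batches of fake face crops
                y_fake = torch.ones(x.size(0), dtype=torch.long)
                # Maximize the loss on the "fake" label (hence the minus sign).
                loss = -F.cross_entropy(model((x + delta).clamp(0, 1)), y_fake)
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():
                    delta.clamp_(-eps, eps)          # keep the UAP imperceptible
        return delta.detach()

    # Usage with dummy data; applying the shared pattern is a single addition.
    uap = craft_uap(detector, [torch.rand(4, 3, 224, 224) for _ in range(2)])
    frame = torch.rand(1, 3, 224, 224)
    adv_frame = (frame + uap).clamp(0, 1)
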
Datasets
DeepFake Detection Challenge (DFDC) dataset
Model(s)
EfficientNet-B7, XceptionNet, EfficientNet-B3 (and other models from the top three DFDC entries)
Author countries
USA, UK