Adversarial Perturbations Fool Deepfake Detectors

Authors: Apurva Gandhi, Shomik Jain

Published: 2020-03-24 00:54:02+00:00

AI Summary

This paper investigates the vulnerability of deepfake detectors to adversarial attacks. The authors demonstrate that adversarial perturbations sharply reduce detector accuracy, from over 95% on unperturbed deepfakes to less than 27% on perturbed ones. They propose and evaluate two defenses, Lipschitz regularization and Deep Image Prior (DIP), to improve robustness.

Abstract

This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2 norm attack in both blackbox and whitebox settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the blackbox case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100 image subsample.
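
For illustration, the following minimal sketch shows a single whitebox FGSM step against a binary deepfake detector in PyTorch. The names (detector, x, labels) and the epsilon value are assumptions made for the example, not the authors' implementation, and the Carlini and Wagner L2 attack is omitted for brevity.

import torch
import torch.nn.functional as F

def fgsm_perturb(detector, x, labels, epsilon=0.01):
    """Fast Gradient Sign Method: perturb x to raise the detector's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(x_adv), labels)
    loss.backward()
    with torch.no_grad():
        # Step in the sign of the input gradient; for images labeled "fake",
        # this pushes the detector toward predicting "real".
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in a valid range
    return x_adv.detach()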


Key findings
Adversarial attacks reduced deepfake detection accuracy to less than 27%. Lipschitz regularization improved detection of perturbed deepfakes on average, including a 10% accuracy boost in the blackbox case, but not sufficiently for practical application. The Deep Image Prior defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100-image subsample.
Approach
The authors generate adversarial perturbations on deepfake images using the Fast Gradient Sign Method and the Carlini and Wagner L2 norm attack, in both whitebox and blackbox settings. They then evaluate two defense mechanisms: Lipschitz regularization, which constrains the detector's gradient with respect to the input, and Deep Image Prior (DIP), which removes perturbations using generative convolutional neural networks in an unsupervised manner.
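
As a hedged illustration of the first defense, the sketch below adds an input-gradient penalty to a standard classification loss, one common way to approximate a Lipschitz constraint on the detector; the penalty form, weight, and variable names are assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def lipschitz_regularized_loss(detector, x, labels, reg_weight=1.0):
    """Cross-entropy loss plus a penalty on the gradient w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    ce_loss = F.cross_entropy(detector(x), labels)
    # Keep the input gradient in the graph so the penalty is differentiable
    # with respect to the detector's parameters.
    (input_grad,) = torch.autograd.grad(ce_loss, x, create_graph=True)
    grad_penalty = input_grad.pow(2).sum(dim=(1, 2, 3)).mean()
    return ce_loss + reg_weight * grad_penalty
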
Datasets
A dataset of 10,000 images: 5,000 real images sampled from CelebA and 5,000 fake images created using the "Few-Shot Face Translation GAN".
Model(s)
ResNet-18 and VGG-16 architectures were used as deepfake detectors. A U-Net architecture was used in the Deep Image Prior defense.
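
As a hedged illustration of how the DIP defense can use such a generative network, the sketch below fits an untrained U-Net-style generator to a perturbed image starting from fixed random noise; the noise shape, step count, learning rate, and stopping rule are assumptions for the example, not the paper's exact procedure.

import torch
import torch.nn.functional as F

def dip_reconstruct(generator, perturbed_image, num_steps=1000, lr=0.01):
    """Fit an untrained generative CNN (e.g. a U-Net mapping 3 channels to
    3 channels at the same resolution) to reproduce a perturbed image of
    shape (1, 3, H, W) from fixed random noise. Natural image structure is
    fit before high-frequency perturbations, so an intermediate
    reconstruction tends to be a 'cleaned' version of the input."""
    noise = torch.randn(1, 3, perturbed_image.shape[-2], perturbed_image.shape[-1])
    optimizer = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(num_steps):
        optimizer.zero_grad()
        loss = F.mse_loss(generator(noise), perturbed_image)
        loss.backward()
        optimizer.step()
    # In practice the reconstruction would be taken at an earlier iteration
    # (or checked against the detector) rather than after full convergence.
    return generator(noise).detach()
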
Author countries
USA