Evading Deepfake-Image Detectors with White- and Black-Box Attacks

Authors: Nicholas Carlini, Hany Farid

Published: 2020-04-01 17:59:59+00:00

AI Summary

This paper demonstrates the vulnerability of state-of-the-art image deepfake detectors to a range of white-box and black-box attacks. Through five attack case studies, the authors reduce the detectors' area under the ROC curve (AUC) from 0.95 to near zero, highlighting the fragility of these forensic classifiers.

Abstract

It is now possible to synthesize highly realistic images of people who don't exist. Such content has, for example, been implicated in the creation of fraudulent social-media profiles responsible for dis-information campaigns. Significant efforts are, therefore, being deployed to detect synthetically-generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.


Key findings
White-box attacks reduced the AUC from 0.95 to as low as 0.0005 (flipping the lowest bit of each pixel), 0.08 (perturbing 1% of the image area), and 0.17 (adding a single noise pattern in the synthesizer's latent space). Even a black-box attack, with no access to the classifier's parameters, reduced the AUC to 0.22. These findings reveal significant vulnerabilities in existing image deepfake detection methods.
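For context, the AUC numbers above measure how well the detector's scores separate real from synthetic images; an AUC near 0 means the attacked detector's ranking is systematically inverted rather than merely at chance (0.5). A minimal sketch of that measurement, using placeholder labels and scores rather than the paper's data:

```python
# Minimal sketch of the AUC metric reported above; the labels and scores
# below are illustrative placeholders, not the paper's data.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 1, 1, 1, 0])              # 1 = synthetic, 0 = real
scores = np.array([0.1, 0.3, 0.9, 0.8, 0.2, 0.4])  # detector's "fake" confidence

# AUC of 1.0 = perfect separation, 0.5 = chance, near 0 = systematically inverted.
print("AUC:", roc_auc_score(labels, scores))
```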
Approach
The authors develop five attack case studies, including flipping low-order pixel bits, perturbing a small (1%) region of the image, and adding a single noise pattern to the synthesizer's latent space. The attacks are categorized as either white-box (full access to the classifier's parameters) or black-box (no access to the target classifier). The effectiveness of each attack is measured by how far it reduces the area under the ROC curve (AUC).
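As an illustration of the white-box setting only (not the authors' released attack code), the sketch below mounts an L-infinity-bounded iterative gradient attack against a hypothetical binary real/fake detector; the model interface, number of steps, step size, and the epsilon = 1/255 budget (chosen to echo the "flip the lowest bit of each pixel" result) are all assumptions.

```python
# Hedged sketch of a white-box evasion attack: push a synthetic image toward the
# detector's "real" decision while keeping every pixel within +/- 1/255 of the
# original. `model` is assumed to return one logit per image (higher = "fake").
import torch
import torch.nn.functional as F

def evade_detector(model, fake_image, epsilon=1/255, steps=40, step_size=0.25/255):
    x_orig = fake_image.clone().detach()
    x_adv = x_orig.clone()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logit = model(x_adv)
        # Loss toward the "real" label (0): minimizing it fools the detector.
        loss = F.binary_cross_entropy_with_logits(logit, torch.zeros_like(logit))
        grad, = torch.autograd.grad(loss, x_adv)

        with torch.no_grad():
            x_adv = x_adv - step_size * grad.sign()                          # gradient step
            x_adv = torch.clamp(x_adv, x_orig - epsilon, x_orig + epsilon)   # stay in budget
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                             # stay a valid image

    return x_adv.detach()
```

The paper's black-box attack, by contrast, assumes no access to the target classifier at all; the sketch above covers only the white-box setting.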
Datasets
A dataset of 94,036 real and synthetic images released by Wang et al. [42] is used for evaluation. The detector itself was trained on 1,000,000 ProGAN-generated images, i.e., on a single generator.
Model(s)
A ResNet-50, pre-trained on ImageNet and then trained to classify images as real or synthetic (Wang et al. [42]). A second classifier from Frank et al. [19] is also considered, but the primary focus is on Wang et al.'s model.
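A minimal sketch of that detector architecture, assuming torchvision's pretrained ResNet-50; this mirrors the general setup attributed to Wang et al. [42], not their exact training pipeline:

```python
# Hedged sketch: ImageNet-pretrained ResNet-50 repurposed as a binary
# real-vs-synthetic classifier, in the spirit of Wang et al. [42].
# Training data, augmentation, and optimization details are omitted.
import torch.nn as nn
from torchvision import models

detector = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
detector.fc = nn.Linear(detector.fc.in_features, 1)  # single "fake" logit
```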
Author countries
USA