Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems

Authors: Nataniel Ruiz, Sarah Adel Bargal, Stan Sclaroff

Published: 2020-03-03 01:18:16+00:00

AI Summary

This paper introduces the novel problem of disrupting deepfakes by generating adversarial attacks against conditional image translation networks. The authors propose class-transferable adversarial attacks that generalize across different conditioning attributes, as well as adversarial training for GANs to improve the robustness of image translation networks. A spread-spectrum attack is also presented to evade blur-based defenses.

Abstract

Face modification systems using deep learning have become increasingly powerful and accessible. Given images of a person's face, such systems can generate new images of that same person under different expressions and poses. Some systems can also modify targeted attributes such as hair color or age. Such manipulated images and videos have been coined Deepfakes. In order to prevent a malicious user from generating modified images of a person without their consent, we tackle the new problem of generating adversarial attacks against such image translation systems, which disrupt the resulting output image. We call this problem disrupting deepfakes. Most image translation architectures are generative models conditioned on an attribute (e.g., put a smile on this person's face). We are the first to propose and successfully apply (1) class-transferable adversarial attacks that generalize to different classes, meaning the attacker does not need knowledge of the conditioning class, and (2) adversarial training for generative adversarial networks (GANs) as a first step towards robust image translation networks. Finally, in gray-box scenarios, blurring can mount a successful defense against disruption. We present a spread-spectrum adversarial attack, which evades blur defenses. Our open-source code can be found at https://github.com/natanielruiz/disrupting-deepfakes.
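
To make the disruption objective concrete, the following is a minimal sketch of an iterative FGSM-style attack that searches for a small input perturbation maximizing the distortion of the translation network's output relative to its output on the clean image. It assumes a PyTorch-style conditional generator G(x, c) with inputs in [-1, 1]; the function name, loss choice, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def disrupt_ifgsm(G, x, c, eps=0.05, step=0.01, iters=20):
    # Hypothetical helper: G is a conditional image translation network
    # (e.g. StarGAN-like), x the input face image in [-1, 1], c the target attribute.
    x_clean = x.detach()
    with torch.no_grad():
        y_clean = G(x_clean, c)                  # output on the unperturbed image
    x_adv = x_clean.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        y_adv = G(x_adv, c)
        loss = F.mse_loss(y_adv, y_clean)        # distance between disrupted and clean outputs
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()   # gradient ascent step (I-FGSM style)
            x_adv = x_clean + torch.clamp(x_adv - x_clean, -eps, eps)   # stay in the eps-ball
            x_adv = torch.clamp(x_adv, -1.0, 1.0)                       # valid pixel range
    return x_adv.detach()

Under the paper's threat model, feeding the returned perturbed image to the translation network should yield a visibly corrupted output rather than a convincing manipulation.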


Key findings
The proposed attacks successfully disrupt several image translation architectures, including GANimation, StarGAN, pix2pixHD, and CycleGAN. Class-transferable attacks generalize effectively across conditioning attributes, adversarial training improves the robustness of image translation networks, and the spread-spectrum attack overcomes blur-based defenses.
Approach
The authors adapt standard adversarial attacks (FGSM, I-FGSM, PGD) to disrupt the output of image translation networks. They propose class-transferable attacks to handle conditional generation without knowledge of the conditioning class, adversarial training for GANs to enhance the robustness of image translation networks against such attacks, and a spread-spectrum attack to bypass blur-based defenses, sketched below.
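
The spread-spectrum idea can be illustrated as follows: each attack iteration applies a blur of a different magnitude before the generator, so that the resulting perturbation remains disruptive across a range of blur strengths a gray-box defender might use. This is a hedged sketch that assumes a PyTorch-style conditional generator G(x, c) and uses torchvision's gaussian_blur as a stand-in for the blur defense; the function name, blur schedule, loss choice, and hyperparameters are assumptions rather than the paper's exact setup.

import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def spread_spectrum_disrupt(G, x, c, sigmas=(0.5, 1.0, 1.5, 2.0),
                            eps=0.05, step=0.01, iters=40):
    # Hypothetical helper: G is a conditional image translation network,
    # x the input face image (N, C, H, W) in [-1, 1], c the target attribute.
    x_clean = x.detach()
    with torch.no_grad():
        y_clean = G(x_clean, c)                      # output on the unperturbed image
    x_adv = x_clean.clone()
    for i in range(iters):
        sigma = float(sigmas[i % len(sigmas)])       # hop across blur magnitudes
        k = int(2 * round(2 * sigma) + 1)            # odd kernel size for this sigma
        x_adv.requires_grad_(True)
        y_adv = G(gaussian_blur(x_adv, kernel_size=k, sigma=sigma), c)
        loss = F.mse_loss(y_adv, y_clean)            # maximize distortion of the blurred-input output
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()
            x_adv = x_clean + torch.clamp(x_adv - x_clean, -eps, eps)   # eps-ball projection
            x_adv = torch.clamp(x_adv, -1.0, 1.0)                       # valid pixel range
    return x_adv.detach()

Cycling the blur magnitude across iterations, rather than fixing a single value, is what spreads the perturbation's effectiveness over the range of blur defenses the attacker anticipates.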
Datasets
CelebA, Cityscapes
Model(s)
GANimation, StarGAN, pix2pixHD, CycleGAN
Author countries
USA