OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training

Authors: Eran Segalis, Eran Galili

Published: 2020-06-17 17:18:29+00:00

AI Summary

This paper introduces a novel class of adversarial attacks, called training-resistant attacks, designed to disrupt face-swapping autoencoders used in creating deepfakes. The proposed Oscillating GAN (OGAN) attack introduces spatial-temporal distortions that survive even when the adversarial images are included in the training set of the autoencoder.

Abstract

Recent advances in autoencoders and generative models have given rise to effective video forgery methods, used for generating so-called deepfakes. Mitigation research is mostly focused on post-factum deepfake detection and not on prevention. We complement these efforts by introducing a novel class of adversarial attacks---training-resistant attacks---which can disrupt face-swapping autoencoders whether or not their adversarial images have been included in the training set of said autoencoders. We propose the Oscillating GAN (OGAN) attack, a novel attack optimized to be training-resistant, which introduces spatial-temporal distortions to the output of face-swapping autoencoders. To implement OGAN, we construct a bilevel optimization problem in which we train a generator and a face-swapping model instance against each other. Specifically, we pair each input image with a target distortion and feed them into a generator that produces an adversarial image. This image will exhibit the distortion when a face-swapping autoencoder is applied to it. We solve the optimization problem by training the generator and the face-swapping model simultaneously, using an iterative process of alternating optimization. Next, we analyze the previously published Distorting Attack and show it is training-resistant, though it is outperformed by our suggested OGAN. Finally, we validate both attacks using a popular implementation of FaceSwap, and show that they transfer across different target models and target faces, including faces the adversarial attacks were not trained on. More broadly, these results demonstrate the existence of training-resistant adversarial attacks, potentially applicable to a wide range of domains.
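The abstract's bilevel construction can be stated compactly. The formulation below is an illustrative gloss rather than the paper's own notation: G_theta is the generator, F_phi the face-swapping autoencoder, delta the target distortion paired with input x, and D the clean training data.

```latex
% Illustrative bilevel formulation (notation assumed, not taken from the paper).
% Outer problem: train the generator so the face-swap output of the adversarial
% image exhibits the target distortion x + delta.
% Inner problem: the face-swap model is itself trained on data that may include
% the adversarial images -- the setting "training resistance" must survive.
\begin{aligned}
\min_{\theta}\quad & \mathbb{E}_{(x,\delta)}\;
  \mathcal{L}_{\mathrm{adv}}\!\left(F_{\phi^{*}(\theta)}\big(G_{\theta}(x,\delta)\big),\; x+\delta\right)\\[2pt]
\text{s.t.}\quad & \phi^{*}(\theta) \in \arg\min_{\phi}\;
  \mathcal{L}_{\mathrm{rec}}\!\left(F_{\phi};\;\mathcal{D}\cup\{G_{\theta}(x,\delta)\}\right)
\end{aligned}
```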


Key findings
OGAN demonstrates the existence of training-resistant adversarial attacks. OGAN outperforms the previously published Distorting Attack, especially when the adversarial samples are included in the training data of the target face-swapping model. These attacks transfer across different target models and faces, even those unseen during training.
Approach
OGAN is formulated as a bilevel optimization problem in which a generator and a face-swapping model instance are trained against each other: each input image is paired with a target distortion, and the generator produces an adversarial image whose distortion persists after the face-swapping autoencoder is applied. The problem is solved iteratively by alternating optimization of the two models, as sketched below.
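A minimal sketch of that alternating scheme, assuming a PyTorch setting; the module names, loss choices, and single-step update schedule are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the alternating optimization (illustrative, not the paper's code).
# `generator` maps (image, target distortion) -> adversarial image;
# `faceswap` stands in for one face-swapping autoencoder instance;
# `loader` yields (x, delta) pairs of images and target distortions.
import torch
import torch.nn.functional as F

def train_ogan(generator, faceswap, loader, device="cpu"):
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    f_opt = torch.optim.Adam(faceswap.parameters(), lr=1e-4)
    for x, delta in loader:
        x, delta = x.to(device), delta.to(device)

        # Generator step: keep the adversarial image close to x while
        # pushing the face-swap output toward the distorted target x + delta.
        adv = generator(x, delta)
        g_loss = F.l1_loss(faceswap(adv), x + delta) + F.l1_loss(adv, x)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

        # Face-swap step: train the autoencoder on the adversarial images
        # themselves, simulating a victim whose training set includes them.
        adv = adv.detach()
        f_loss = F.l1_loss(faceswap(adv), adv)
        f_opt.zero_grad()
        f_loss.backward()
        f_opt.step()
```

Alternating the two steps pressures the generator to find distortions the autoencoder cannot simply learn away, which is the property the paper calls training resistance.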
Datasets
UNKNOWN
Model(s)
FaceSwap (a popular open-source face-swapping autoencoder implementation, used as the target model); attacks evaluated: Oscillating GAN (OGAN) and the previously published Distorting Attack
Author countries
UNKNOWN