Adversarial Magnification to Deceive Deepfake Detection through Super Resolution

Authors: Davide Alessandro Coccomini, Roberto Caldelli, Giuseppe Amato, Fabrizio Falchi, Claudio Gennaro

Published: 2024-07-02 21:17:36+00:00

AI Summary

This paper proposes a novel black-box adversarial attack against deepfake detection systems using super-resolution (SR) techniques. By applying SR to fake images, the attack minimally alters their visual appearance while significantly reducing the accuracy of deepfake detectors, increasing both false negatives and false positives.

Abstract

Deepfake technology is rapidly advancing, posing significant challenges to the detection of manipulated media content. In parallel, adversarial attack techniques have been developed to fool deepfake detectors and make deepfakes even more difficult to detect. This paper explores the application of super resolution techniques as a possible adversarial attack in deepfake detection. Through our experiments, we demonstrate that the minimal changes these methods make to the visual appearance of images can have a profound impact on the performance of deepfake detection systems. We propose a novel attack using super resolution as a quick, black-box and effective method to camouflage fake images and/or generate false alarms on pristine images. Our results indicate that the use of super resolution can significantly impair the accuracy of deepfake detectors, thereby highlighting the vulnerability of such systems to adversarial attacks. The code to reproduce our experiments is available at: https://github.com/davide-coccomini/Adversarial-Magnification-to-Deceive-Deepfake-Detection-through-Super-Resolution


Key findings
Applying super-resolution significantly impairs the accuracy of deepfake detectors: it camouflages fake images, increasing the false negative rate (up to 18%), and causes pristine images to be misclassified as fake, increasing the false positive rate (up to 14%); the rate definitions are sketched below. The visual changes introduced by the SR attack are minimal, making the attack highly effective and difficult for human observers to notice.
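
For reference, a minimal sketch of how these two rates are conventionally defined, assuming binary labels with 1 = fake and 0 = pristine (the function and variable names are illustrative, not taken from the paper's code):

    def error_rates(labels, preds):
        """Return (false_negative_rate, false_positive_rate).

        A false negative is a fake image classified as pristine;
        a false positive is a pristine image classified as fake.
        """
        fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
        fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
        n_fake = sum(labels)
        n_real = len(labels) - n_fake
        return fn / n_fake, fp / n_real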
Approach
The authors propose an adversarial attack that leverages super-resolution to modify deepfake images: a face detector isolates each face within a frame, the face is downscaled and then upscaled back to its original size with an SR model, and the result replaces the original face. This process smooths out deepfake artifacts, making them harder for detectors to identify. A sketch of the pipeline follows.
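
A minimal sketch of such a pipeline, assuming OpenCV's contrib dnn_superres module, a pretrained EDSR weights file (EDSR_x2.pb), and a Haar-cascade face detector as a lightweight stand-in for the detector used by the authors; this is an illustrative reconstruction under those assumptions, not the paper's exact implementation:

    import cv2

    # Face detector: Haar cascade as a lightweight stand-in for the
    # paper's face detector.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Super-resolution model: EDSR at 2x, matching the paper's SR backbone.
    # EDSR_x2.pb is an assumed path to pretrained weights.
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x2.pb")
    sr.setModel("edsr", 2)

    def sr_attack(frame):
        """Downscale each detected face, re-upscale it with SR, and paste
        the result back, smoothing deepfake artifacts in the process."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            face = frame[y:y + h, x:x + w]
            # Downscale by the SR factor so the SR output roughly
            # matches the original crop size.
            small = cv2.resize(face, (w // 2, h // 2),
                               interpolation=cv2.INTER_AREA)
            restored = sr.upsample(small)
            # Resize to the exact bounding box before pasting back
            # (odd box sizes make the 2x output off by one pixel).
            frame[y:y + h, x:x + w] = cv2.resize(restored, (w, h))
        return frame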
Datasets
FaceForensics++ (FF++) dataset
Model(s)
ResNet50, Swin-Small, XceptionNet (for deepfake detection); EDSR (for super-resolution)
Author countries
Italy