Adversarial Magnification to Deceive Deepfake Detection through Super Resolution
Authors: Davide Alessandro Coccomini, Roberto Caldelli, Giuseppe Amato, Fabrizio Falchi, Claudio Gennaro
Published: 2024-07-02 21:17:36+00:00
AI Summary
This paper investigates super-resolution (SR) techniques as a novel black-box adversarial attack against deepfake detection systems. The authors show that the minimal visual changes introduced by SR methods can substantially degrade detector performance. The proposed attack both camouflages fake images (increasing false negatives) and triggers false alarms on pristine images (increasing false positives), exposing a vulnerability of such systems.
Abstract
Deepfake technology is advancing rapidly, posing significant challenges to the detection of manipulated media content. In parallel, adversarial attack techniques have been developed to fool deepfake detectors and make deepfakes even harder to detect. This paper explores the application of super-resolution techniques as a possible adversarial attack on deepfake detection. Through our experiments, we demonstrate that the minimal changes these methods make to the visual appearance of images can have a profound impact on the performance of deepfake detection systems. We propose a novel attack that uses super-resolution as a quick, black-box, and effective method to camouflage fake images and/or generate false alarms on pristine images. Our results indicate that the use of super-resolution can significantly impair the accuracy of deepfake detectors, highlighting the vulnerability of such systems to adversarial attacks. The code to reproduce our experiments is available at: https://github.com/davide-coccomini/Adversarial-Magnification-to-Deceive-Deepfake-Detection-through-Super-Resolution
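Illustrative Example
The abstract does not include code; the authors' actual pipeline is in the linked repository. The sketch below is only a minimal illustration of the attack idea as described above, under stated assumptions: it uses OpenCV's dnn_superres module with a pre-trained ESPCN 2x model file (ESPCN_x2.pb, assumed to be available locally), and a hypothetical detector callable that scores an image as fake. Resizing back to the original dimensions is a design choice of this sketch, so that a fixed-input detector receives an image of the same size carrying only the subtle SR-induced pixel statistics.

import cv2
import numpy as np

def sr_attack(image_bgr: np.ndarray, model_path: str = "ESPCN_x2.pb") -> np.ndarray:
    # Upscale 2x with a learned super-resolution model, then resize back
    # to the original resolution; the magnification itself is undone, but
    # the low-level artifacts introduced by SR remain in the image.
    h, w = image_bgr.shape[:2]
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel(model_path)           # pre-trained SR weights (assumed available)
    sr.setModel("espcn", 2)            # ESPCN architecture, 2x magnification
    upscaled = sr.upsample(image_bgr)  # (2h, 2w, 3) super-resolved image
    return cv2.resize(upscaled, (w, h), interpolation=cv2.INTER_AREA)

# Usage with a hypothetical detector returning P(fake):
# img = cv2.imread("face.png")
# print("before:", detector(img), "after:", detector(sr_attack(img)))

The attack is black-box in the sense the abstract describes: it needs no access to the detector's gradients or architecture, only the ability to preprocess the input image with an off-the-shelf SR model.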