2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems

Authors: Chiara Galdi, Michele Panariello, Massimiliano Todisco, Nicholas Evans

Published: 2024-08-26 09:41:40+00:00

AI Summary

This paper introduces 2D-Malafide, a lightweight adversarial attack that uses 2D convolutional filters to deceive face deepfake detection systems. Unlike traditional additive-noise attacks, 2D-Malafide optimizes only a small number of filter coefficients, yielding perturbations that transfer across different face images and substantially degrade detection performance.

Abstract

We introduce 2D-Malafide, a novel and lightweight adversarial attack designed to deceive face deepfake detection systems. Building upon the concept of 1D convolutional perturbations explored in the speech domain, our method leverages 2D convolutional filters to craft perturbations which significantly degrade the performance of state-of-the-art face deepfake detectors. Unlike traditional additive noise approaches, 2D-Malafide optimises a small number of filter coefficients to generate robust adversarial perturbations which are transferable across different face images. Experiments, conducted using the FaceForensics++ dataset, demonstrate that 2D-Malafide substantially degrades detection performance in both white-box and black-box settings, with larger filter sizes having the greatest impact. Additionally, we report an explainability analysis using GradCAM which illustrates how 2D-Malafide misleads detection systems by altering the image areas used most for classification. Our findings highlight the vulnerability of current deepfake detection systems to convolutional adversarial attacks as well as the need for future work to enhance detection robustness through improved image fidelity constraints.


Key findings

2D-Malafide substantially degrades the performance of the CADDM and SBI deepfake detectors in both white-box and black-box settings, with larger filter sizes having the greatest impact. GradCAM analysis reveals that the attack misleads detectors by altering the image areas most relied upon for classification.

Approach

2D-Malafide crafts adversarial perturbations by optimizing the coefficients of a 2D convolutional filter. The filter is applied to deepfake images so as to maximize the likelihood of misclassification by state-of-the-art deepfake detectors. Filter size is the key parameter, trading off attack strength against image fidelity.
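The idea of optimizing only the coefficients of a 2D convolutional filter (rather than a per-pixel additive perturbation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy detector, filter size, random input image, optimizer, and loss target are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a deepfake detector (assumption: any differentiable
# classifier producing a fake/real logit plays this role).
detector = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 1),
)

# One trainable k x k filter per colour channel: these few coefficients
# are the only attack parameters (k is a hypothetical choice; the paper
# studies the effect of varying filter size).
k = 7
filt = torch.zeros(3, 1, k, k)
filt[:, 0, k // 2, k // 2] = 1.0  # start from an identity filter
filt.requires_grad_(True)

image = torch.rand(1, 3, 64, 64)  # stand-in for a deepfake image
opt = torch.optim.Adam([filt], lr=1e-2)

losses = []
for _ in range(50):
    # Depthwise 2D convolution: one filter applied per colour channel.
    perturbed = F.conv2d(image, filt, padding=k // 2, groups=3)
    logit = detector(perturbed)
    # Push the detector towards the "real" label (assumed to be 0 here).
    loss = F.binary_cross_entropy_with_logits(logit, torch.zeros_like(logit))
    losses.append(loss.item())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the same filter is convolved over the whole image, the learned perturbation is input-agnostic and can be reused across face images, which is what makes the attack transferable.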
Datasets

FaceForensics++ (FF++)

Model(s)

CADDM and SBI (both using an EfficientNet backbone)

Author countries

France