Nearly Solved? Robust Deepfake Detection Requires More than Visual Forensics

Authors: Guy Levy, Nathan Liebmann

Published: 2024-12-07 14:53:41+00:00

AI Summary

This paper investigates the robustness of state-of-the-art deepfake detectors to adversarial attacks. It finds that detectors relying on local visual features are highly susceptible, while those using semantic embeddings are more robust. A novel typographic attack targeting semantic models is also introduced.

Abstract

Deepfakes are on the rise, with increased sophistication and prevalence allowing for high-profile social engineering attacks. Detecting them in the wild is therefore as important as ever, giving rise to new approaches breaking benchmark records in this task. In line with previous work, we show that recently developed state-of-the-art detectors are susceptible to classical adversarial attacks, even in a highly realistic black-box setting, putting their usability in question. We argue that crucial 'robust features' of deepfakes lie in their higher semantics, and follow this with evidence that a detector based on a semantic embedding model is less susceptible to black-box perturbation attacks. We show that large visuo-lingual models like GPT-4o can perform zero-shot deepfake detection better than current state-of-the-art methods, and introduce a novel attack based on high-level semantic manipulation. Finally, we argue that hybridising low- and high-level detectors can improve adversarial robustness, based on their complementary strengths and weaknesses.
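
To make the abstract's black-box threat model concrete, the sketch below shows a minimal query-only perturbation attack of the kind the paper evaluates against: it nudges a frame within a small L-infinity ball and keeps only changes that lower a detector's 'fake' score. The `detector` interface, step size, and query budget are illustrative assumptions, not the authors' exact attack.

```python
import numpy as np

def black_box_perturbation_attack(image, detector, eps=4 / 255, queries=2000, seed=0):
    """Query-only random-search attack on a black-box deepfake detector.

    `image` is a float array in [0, 1]; `detector(image) -> float` returns a
    'fake' probability (hypothetical interface). The attack accepts a proposed
    step only if it lowers that score, and stops once the score drops below 0.5.
    """
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = detector(adv)
    for _ in range(queries):
        # Propose a small signed step, then project back into the eps-ball around the original.
        candidate = adv + (eps / 8) * rng.choice([-1.0, 1.0], size=image.shape)
        candidate = np.clip(candidate, image - eps, image + eps)
        candidate = np.clip(candidate, 0.0, 1.0)
        score = detector(candidate)
        if score < best:  # keep the perturbation only if it fools the detector more
            adv, best = candidate, score
        if best < 0.5:    # detector now labels the deepfake as real
            break
    return adv, best
```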


Key findings
Local feature-based deepfake detectors are highly vulnerable to adversarial attacks, while semantic embedding models show greater robustness. A simple zero-shot approach using GPT-4o outperforms state-of-the-art methods on Celeb-DF, but is susceptible to a novel typographic attack. Hybrid models combining both low-level and high-level detection are suggested for improved robustness.
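
The typographic attack is only named above; as a rough illustration of the idea, the sketch below overlays a misleading caption onto a frame so that a semantics-driven detector (a CLIP- or VLM-based model) may be swayed by the printed text rather than the visual evidence. The caption wording, placement, and use of Pillow are assumptions for illustration, not the paper's exact procedure.

```python
from PIL import Image, ImageDraw, ImageFont

def typographic_overlay(frame_path, out_path, caption="authentic, unedited photograph"):
    """Stamp a misleading caption onto a face frame (illustrative sketch only)."""
    img = Image.open(frame_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    # Draw a dark band along the bottom edge so the text is legible to the model.
    band_h = 24
    draw.rectangle([0, img.height - band_h, img.width, img.height], fill=(0, 0, 0))
    draw.text((8, img.height - band_h + 6), caption, fill=(255, 255, 255), font=font)
    img.save(out_path)
```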
Approach
The authors evaluate existing deepfake detection methods and show that they are vulnerable to adversarial attacks. They propose that semantic embedding models are more robust and demonstrate this using GPT-4o for zero-shot detection. Finally, they suggest hybridising low-level and high-level detectors for improved robustness.
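
A minimal sketch of the zero-shot setup described above, using the OpenAI Python SDK to ask GPT-4o for a real/fake verdict on a single face frame; the prompt wording and file handling are assumptions, as the paper's exact instructions are not reproduced here.

```python
import base64
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def zero_shot_deepfake_check(frame_path, prompt=None):
    """Ask GPT-4o whether a face frame looks manipulated (illustrative prompt)."""
    prompt = prompt or (
        "You are a forensic analyst. Does this face image show signs of being a "
        "deepfake or other manipulation? Answer 'real' or 'fake' and explain briefly."
    )
    with open(frame_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

In the spirit of the suggested hybrid, the textual verdict could be mapped to a score and fused (for example, averaged) with the output of a low-level detector such as LaDeDa, so that the perturbation-sensitive and typography-sensitive failure modes do not coincide.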
Datasets
FaceForensics++, Celeb-DF
Model(s)
LaDeDa (ResNet50 variant), CLIPping the Deception (CLIP with prompt tuning), GPT-4o
Author countries
Israel