DeepFake-Adapter: Dual-Level Adapter for DeepFake Detection

Authors: Rui Shao, Tianxing Wu, Liqiang Nie, Ziwei Liu

Published: 2023-06-01 16:23:22+00:00

AI Summary

DeepFake-Adapter is a parameter-efficient approach for deepfake detection that leverages high-level semantics from pre-trained Vision Transformers (ViTs). It introduces lightweight dual-level adapter modules to adapt the ViT to deepfake data while keeping the backbone frozen, improving generalization to unseen or degraded samples.

Abstract

Existing deepfake detection methods fail to generalize well to unseen or degraded samples, which can be attributed to the over-fitting of low-level forgery patterns. Here we argue that high-level semantics are also indispensable recipes for generalizable forgery detection. Recently, large pre-trained Vision Transformers (ViTs) have shown promising generalization capability. In this paper, we propose the first parameter-efficient tuning approach for deepfake detection, namely DeepFake-Adapter, to effectively and efficiently adapt the generalizable high-level semantics from large pre-trained ViTs to aid deepfake detection. Given large pre-trained models but limited deepfake data, DeepFake-Adapter introduces lightweight yet dedicated dual-level adapter modules to a ViT while keeping the model backbone frozen. Specifically, to guide the adaptation process to be aware of both global and local forgery cues of deepfake data, 1) we not only insert Globally-aware Bottleneck Adapters in parallel to MLP layers of ViT, 2) but also actively cross-attend Locally-aware Spatial Adapters with features from ViT. Unlike existing deepfake detection methods merely focusing on low-level forgery patterns, the forgery detection process of our model can be regularized by generalizable high-level semantics from a pre-trained ViT and adapted by global and local low-level forgeries of deepfake data. Extensive experiments on several standard deepfake detection benchmarks validate the effectiveness of our approach. Notably, DeepFake-Adapter demonstrates a convincing advantage under cross-dataset and cross-manipulation settings. The code has been released at https://github.com/rshaojimmy/DeepFake-Adapter.
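The abstract describes inserting Globally-aware Bottleneck Adapters in parallel to the MLP layers of a frozen ViT. The sketch below is one way such a parallel bottleneck branch could look in PyTorch; it is not the authors' released code. The attribute names norm1, attn, norm2, and mlp assume a timm-style ViT block, and the bottleneck width of 64 is an illustrative choice.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Down-project, non-linearity, up-project: a lightweight parallel branch."""
        def __init__(self, dim: int, bottleneck: int = 64):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.act = nn.GELU()
            self.up = nn.Linear(bottleneck, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.up(self.act(self.down(x)))

    class AdaptedViTBlock(nn.Module):
        """Wraps one pre-trained ViT block; only the adapter parameters are trainable."""
        def __init__(self, block: nn.Module, dim: int, bottleneck: int = 64):
            super().__init__()
            self.block = block
            for p in self.block.parameters():       # freeze the pre-trained block
                p.requires_grad = False
            self.adapter = BottleneckAdapter(dim, bottleneck)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Self-attention sub-layer of the frozen block (timm-style attribute names assumed).
            x = x + self.block.attn(self.block.norm1(x))
            # The frozen MLP and the trainable bottleneck adapter run in parallel
            # on the same normalized input, and their outputs are summed.
            h = self.block.norm2(x)
            return x + self.block.mlp(h) + self.adapter(h)

Because the adapter branch is additive, the frozen block's behavior is preserved at initialization and only a small number of new parameters receive gradients.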


Key findings
DeepFake-Adapter significantly outperforms existing methods in cross-dataset and cross-manipulation settings, demonstrating superior generalization. The parameter-efficient design achieves state-of-the-art performance while training less than 20% of the original ViT's parameters. The model is also robust to low-level corruptions and generalizes well to deepfakes generated by diffusion models.
Approach
The approach keeps a pre-trained ViT backbone frozen and trains only lightweight dual-level adapter modules: Globally-aware Bottleneck Adapters inserted in parallel to the ViT's MLP layers, and Locally-aware Spatial Adapters that cross-attend with features from the ViT. The two adapter types capture global and local forgery cues, respectively, enabling efficient adaptation to deepfake data while preserving the generalizable high-level semantics of the pre-trained backbone.
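The sketch below illustrates, under stated assumptions rather than as the paper's exact implementation, how a locally-aware spatial adapter could cross-attend local convolutional features with frozen ViT patch tokens, and how only the adapter parameters would be optimized. The module and variable names (SpatialAdapter, adapters, head) and all hyperparameters are hypothetical.

    import torch
    import torch.nn as nn

    class SpatialAdapter(nn.Module):
        """Local convolutional features cross-attend with frozen ViT patch tokens."""
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            # Depthwise + pointwise convolutions pick up local, low-level forgery cues.
            self.local = nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),
                nn.GELU(),
                nn.Conv2d(dim, dim, kernel_size=1),
            )
            self.norm_q = nn.LayerNorm(dim)
            self.norm_kv = nn.LayerNorm(dim)
            self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, vit_tokens: torch.Tensor, grid: int) -> torch.Tensor:
            # vit_tokens: (B, N, C) patch tokens from a frozen ViT stage,
            # CLS token removed, with N == grid * grid.
            B, N, C = vit_tokens.shape
            fmap = vit_tokens.transpose(1, 2).reshape(B, C, grid, grid)
            local = self.local(fmap).flatten(2).transpose(1, 2)          # (B, N, C)
            fused, _ = self.cross_attn(self.norm_q(local),
                                       self.norm_kv(vit_tokens),
                                       self.norm_kv(vit_tokens))
            return vit_tokens + fused   # adapted tokens go to later blocks / the head

    # Only the adapters and the classification head are optimized; the ViT stays frozen.
    # `adapters` and `head` are placeholder modules for illustration:
    # trainable = [p for m in (*adapters, head) for p in m.parameters()]
    # optimizer = torch.optim.AdamW(trainable, lr=1e-4)

In this reading, the convolutional branch supplies local low-level evidence while the cross-attention lets it be regularized by the backbone's high-level semantic tokens, matching the global/local division of labor described above.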
Datasets
FaceForensics++, Celeb-DF, Deepfake Detection Challenge (DFDC), DeeperForensics-1.0
Model(s)
Vision Transformer (ViT), specifically ViT-Base and ViT-Large, with added Globally-aware Bottleneck Adapters and Locally-aware Spatial Adapters.
Author countries
China, Singapore