Towards More General Video-based Deepfake Detection through Facial Component Guided Adaptation for Foundation Model
Authors: Yue-Hua Han, Tai-Ming Huang, Kai-Lung Hua, Jun-Cheng Chen
Published: 2024-04-08 14:58:52+00:00
AI Summary
This paper proposes a novel video-based deepfake detection method that adapts a CLIP image encoder through a side-network-based decoder with spatial and temporal modules. Facial Component Guidance (FCG) enhances spatial learning by steering the model's attention toward key facial regions, improving generalizability and efficiency.
Abstract
Generative models have enabled the creation of highly realistic facial-synthetic images, raising significant concerns due to their potential for misuse. Despite rapid advancements in the field of deepfake detection, developing efficient approaches that leverage foundation models for improved generalizability to unseen forgery samples remains challenging. To address this challenge, we propose a novel side-network-based decoder that extracts spatial and temporal cues from the CLIP image encoder for generalized video-based deepfake detection. Additionally, we introduce Facial Component Guidance (FCG) to enhance the generalizability of spatial learning by encouraging the model to focus on key facial regions. By leveraging the generic features of a vision-language foundation model, our approach demonstrates promising generalizability on challenging deepfake datasets while also achieving superior training-data efficiency, parameter efficiency, and model robustness.
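The abstract describes a frozen CLIP image encoder whose per-frame features are consumed by a trainable side network with spatial and temporal modules. Below is a minimal PyTorch sketch of that general idea; it is not the authors' code. The module names (SpatialBlock, TemporalBlock, SideDecoder), dimensions, block counts, and the random stand-in for the CLIP patch tokens are illustrative assumptions, and the FCG objective (an auxiliary signal steering spatial attention toward facial regions) is omitted.

```python
# Sketch of a side-network decoder adapting a frozen image encoder for
# video-based deepfake detection. Patch tokens from a frozen CLIP encoder
# are assumed as input; here they are replaced by random tensors.
import torch
import torch.nn as nn


class SpatialBlock(nn.Module):
    """Self-attention over the patch tokens of a single frame (spatial cues)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B*T, N, D)
        x = self.norm(tokens)
        out, _ = self.attn(x, x, x)
        return tokens + out


class TemporalBlock(nn.Module):
    """Self-attention across frames at each spatial location (temporal cues)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor, b: int, t: int) -> torch.Tensor:  # (B*T, N, D)
        n, d = tokens.shape[1], tokens.shape[2]
        # Regroup so attention runs over the time axis for each patch position.
        x = tokens.view(b, t, n, d).permute(0, 2, 1, 3).reshape(b * n, t, d)
        y = self.norm(x)
        out, _ = self.attn(y, y, y)
        x = x + out
        return x.view(b, n, t, d).permute(0, 2, 1, 3).reshape(b * t, n, d)


class SideDecoder(nn.Module):
    """Lightweight trainable side network fed by features of a frozen encoder."""
    def __init__(self, dim: int = 256, num_blocks: int = 2):
        super().__init__()
        self.spatial = nn.ModuleList([SpatialBlock(dim) for _ in range(num_blocks)])
        self.temporal = nn.ModuleList([TemporalBlock(dim) for _ in range(num_blocks)])
        self.head = nn.Linear(dim, 2)  # real vs. fake logits

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # (B, T, N, D)
        b, t, n, d = feats.shape
        x = feats.reshape(b * t, n, d)
        for sp, tp in zip(self.spatial, self.temporal):
            x = sp(x)        # spatial cues within each frame
            x = tp(x, b, t)  # temporal cues across frames
        video_feat = x.view(b, t, n, d).mean(dim=(1, 2))  # pool over frames and patches
        return self.head(video_feat)


if __name__ == "__main__":
    # Stand-in for frozen CLIP patch tokens: 2 clips, 8 frames, 196 patches, 256-dim.
    tokens = torch.randn(2, 8, 196, 256)
    logits = SideDecoder(dim=256)(tokens)
    print(logits.shape)  # torch.Size([2, 2])
```

In such a design only the side decoder is trained, which is consistent with the parameter efficiency the abstract claims; the exact feature taps, fusion, and FCG loss would follow the paper's implementation.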