Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization
Authors: Ashutosh Anshul, Shreyas Gopal, Deepu Rajan, Eng Siong Chng
Published: 2025-11-13 11:34:03+00:00
Comment: Under Review, Multimodal Deepfake detection
AI Summary
This paper proposes a single-stage framework for multimodal deepfake detection and temporal localization, addressing limitations of prior methods that struggle with generalization or overlook intra-modal artifacts. The approach leverages next-frame feature prediction for both uni-modal and cross-modal features, combined with a window-level attention mechanism to capture local inconsistencies. The framework demonstrates strong generalization across various manipulations and datasets, while also enabling precise temporal localization.
Abstract
Recent multimodal deepfake detection methods designed for generalization conjecture that single-stage supervised training struggles to generalize across unseen manipulations and datasets. However, such generalization-oriented approaches require pretraining on real samples. Additionally, these methods primarily focus on detecting audio-visual inconsistencies and may overlook intra-modal artifacts, causing them to fail against manipulations that preserve audio-visual alignment. To address these limitations, we propose a single-stage training framework that enhances generalization by incorporating next-frame prediction for both uni-modal and cross-modal features. We further introduce a window-level attention mechanism that captures discrepancies between predicted and actual frames, enabling the model to detect local artifacts around every frame; this is crucial for accurately classifying fully manipulated videos and for effectively localizing deepfake segments in partially spoofed samples. Evaluated on multiple benchmark datasets, our model demonstrates strong generalization and precise temporal localization.
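To make the abstract's mechanism concrete, below is a minimal sketch of the general idea: predict each frame's features from preceding frames, form residuals between predicted and actual features, and apply attention restricted to a local window around each frame to score per-frame artifacts and an overall real/fake decision. This is not the authors' implementation; the GRU predictor, feature dimension, window size, and the two output heads (NextFrameSketch, frame_head, cls_head) are illustrative assumptions, since the paper's architectural details are not given in the abstract.

```python
import torch
import torch.nn as nn


class NextFrameSketch(nn.Module):
    """Hypothetical sketch: next-frame feature prediction + window-level
    attention over prediction residuals (not the paper's exact model)."""

    def __init__(self, dim=256, window=5, heads=4):
        super().__init__()
        # Causal predictor over per-frame features (GRU chosen arbitrarily here).
        self.predictor = nn.GRU(dim, dim, batch_first=True)
        # Attention over residuals, restricted to a local temporal window.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window = window
        self.frame_head = nn.Linear(dim, 1)  # per-frame score for temporal localization
        self.cls_head = nn.Linear(dim, 1)    # video-level real/fake score

    def forward(self, feats):
        # feats: (B, T, dim) per-frame features (uni-modal or fused audio-visual).
        B, T, D = feats.shape
        pred, _ = self.predictor(feats[:, :-1])    # predict features of frames 1..T-1
        residual = feats[:, 1:] - pred             # discrepancy: actual minus predicted
        # Boolean mask: True = blocked; each frame attends only within +/- window.
        idx = torch.arange(T - 1, device=feats.device)
        mask = (idx[None, :] - idx[:, None]).abs() > self.window
        ctx, _ = self.attn(residual, residual, residual, attn_mask=mask)
        frame_scores = self.frame_head(ctx).squeeze(-1)          # (B, T-1) localization
        video_score = self.cls_head(ctx.mean(dim=1)).squeeze(-1)  # (B,) detection
        return video_score, frame_scores


# Usage with dummy features: 2 videos, 40 frames, 256-d features.
model = NextFrameSketch()
video_score, frame_scores = model(torch.randn(2, 40, 256))
```

The local attention window is what lets the same residual stream serve both tasks: frame-level scores flag where predicted and observed features diverge (localization in partially spoofed videos), while pooling those scores supports the video-level decision.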