Next-Frame Feature Prediction for Multimodal Deepfake Detection and Temporal Localization

Authors: Ashutosh Anshul, Shreyas Gopal, Deepu Rajan, Eng Siong Chng

Published: 2025-11-13 11:34:03+00:00

Comment: Under Review, Multimodal Deepfake detection

AI Summary

This paper proposes a single-stage framework for multimodal deepfake detection and temporal localization, addressing limitations of prior methods that struggle with generalization or overlook intra-modal artifacts. The approach leverages next-frame feature prediction for both uni-modal and cross-modal features, combined with a window-level attention mechanism to capture local inconsistencies. This framework demonstrates strong generalization across various manipulations and datasets, while also providing precise temporal localization capabilities.

Abstract

Recent multimodal deepfake detection methods designed for generalization conjecture that single-stage supervised training struggles to generalize across unseen manipulations and datasets. However, such generalization-oriented approaches require pretraining on real samples. Additionally, these methods primarily focus on detecting audio-visual inconsistencies and may overlook intra-modal artifacts, causing them to fail against manipulations that preserve audio-visual alignment. To address these limitations, we propose a single-stage training framework that enhances generalization by incorporating next-frame prediction for both uni-modal and cross-modal features. We also introduce a window-level attention mechanism that captures discrepancies between predicted and actual frames, enabling the model to detect local artifacts around every frame; this is crucial both for accurately classifying fully manipulated videos and for localizing deepfake segments in partially spoofed samples. Evaluated on multiple benchmark datasets, our model demonstrates strong generalization and precise temporal localization.


Key findings
The model achieved strong generalization capabilities across unseen manipulations and datasets (e.g., perfect scores on KoDF, 97.65 AP on CREMA) despite its single-stage design. It also set a new state-of-the-art for temporal deepfake localization on the LAV-DF dataset, showing significant gains at high IoU thresholds. The masked-prediction approach enhanced interpretability by visually highlighting manipulated modalities and temporal segments.
Approach
The problem is solved using a single-stage training framework that incorporates a Masked-Prediction Feature Extraction Module. This module predicts next-frame features for both uni-modal (audio, visual) and cross-modal streams, and then captures discrepancies between predicted and actual features using local window-based convolutional cross-attention. A frame-level contrastive loss guides the learning, and separate prediction heads are used for deepfake classification or temporal localization.
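The core idea above can be illustrated with a minimal sketch: given causal next-frame feature predictions, per-frame prediction errors are aggregated with attention weights computed over a small local window, so frames near a manipulation receive elevated scores. This is an assumption-laden toy (the paper uses learned convolutional cross-attention and transformer encoders/decoders, not this hand-rolled softmax); the function name and window scheme are hypothetical.

```python
import numpy as np

def frame_discrepancy_scores(actual, predicted, window=5):
    """Toy window-level attention over next-frame prediction errors.

    actual, predicted: (T, D) arrays of frame features, where predicted[t]
    is an estimate of actual[t] made from frames < t (causal prediction).
    Returns a (T,) array of anomaly scores; hypothetical simplification
    of the paper's convolutional cross-attention.
    """
    # Per-frame prediction error (L2 distance between predicted and actual).
    err = np.linalg.norm(actual - predicted, axis=1)
    T = len(err)
    scores = np.empty(T)
    for t in range(T):
        # Local window centered on frame t, clipped at sequence edges.
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        local = err[lo:hi]
        # Softmax attention over the window: larger errors get more weight.
        w = np.exp(local - local.max())
        w /= w.sum()
        scores[t] = float(w @ local)
    return scores
```

A frame with a large prediction error lifts the scores of its whole window, which mirrors how window-level attention lets the detector flag local artifacts around every frame rather than only a global video-level decision.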
Datasets
FakeAVCeleb [65], VoxCeleb2 [66], KoDF [67], LAV-DF [59, 60], CREMA [75]
Model(s)
AV-HuBERT's ResNet-18 visual encoder, ViT audio encoder, Causal Transformer Encoder/Decoder, Convolutional Cross-Attention, UMMAFormer [11] (adapted for regression head)
Author countries
Singapore