CAST: Cross-Attentive Spatio-Temporal feature fusion for Deepfake detection

Authors: Aryan Thakre, Omkar Nagwekar, Vedang Talekar, Aparna Santra Biswas

Published: 2025-06-26 18:51:17+00:00

AI Summary

The paper proposes CAST, a deepfake detection model that uses cross-attention to fuse the spatial and temporal features extracted by a CNN-Transformer architecture. This integrated approach lets temporal features attend to relevant spatial regions, improving detection of subtle, time-evolving artifacts and outperforming methods that process spatial and temporal features independently.

Abstract

Deepfakes have emerged as a significant threat to digital media authenticity, increasing the need for advanced detection techniques that can identify subtle and time-dependent manipulations. CNNs are effective at capturing spatial artifacts, and Transformers excel at modeling temporal inconsistencies. However, many existing CNN-Transformer models process spatial and temporal features independently. In particular, attention-based methods often use separate attention mechanisms for spatial and temporal features and combine them using naive approaches like averaging, addition, or concatenation, which limits the depth of spatio-temporal interaction. To address this challenge, we propose a unified CAST model that leverages cross-attention to effectively fuse spatial and temporal features in a more integrated manner. Our approach allows temporal features to dynamically attend to relevant spatial regions, enhancing the model's ability to detect fine-grained, time-evolving artifacts such as flickering eyes or warped lips. This design enables more precise localization and deeper contextual understanding, leading to improved performance across diverse and challenging scenarios. We evaluate the performance of our model using the FaceForensics++, Celeb-DF, and DeepfakeDetection datasets in both intra- and cross-dataset settings to affirm the superiority of our approach. Our model achieves strong performance with an AUC of 99.49 percent and an accuracy of 97.57 percent in intra-dataset evaluations. In cross-dataset testing, it demonstrates impressive generalization by achieving a 93.31 percent AUC on the unseen DeepfakeDetection dataset. These results highlight the effectiveness of cross-attention-based feature fusion in enhancing the robustness of deepfake video detection.
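To make the fusion idea concrete, the core operation described above can be written as scaled dot-product cross-attention in which the temporal features supply the queries and the spatial features supply the keys and values. The sketch below is illustrative only, not the authors' code; the function name, projection matrices, and tensor shapes are assumptions.

```python
# Minimal sketch of cross-attention fusion (illustrative; not the paper's implementation).
# Temporal features act as queries; spatial features act as keys/values, so each
# temporal token can attend to the spatial regions most relevant to it.
import torch
import torch.nn.functional as F

def cross_attention(temporal, spatial, w_q, w_k, w_v):
    """temporal: (B, T, D) temporal tokens; spatial: (B, N, D) spatial tokens."""
    q = temporal @ w_q                                       # (B, T, D) queries from the temporal stream
    k = spatial @ w_k                                        # (B, N, D) keys from the spatial stream
    v = spatial @ w_v                                        # (B, N, D) values from the spatial stream
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # (B, T, N) similarity of each temporal token to each region
    attn = F.softmax(scores, dim=-1)                         # attention weights over spatial regions
    return attn @ v                                          # (B, T, D) spatially informed temporal features
```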


Key findings
CAST achieved strong performance, with an AUC of 99.49% and accuracy of 97.57% in intra-dataset evaluations on FaceForensics++. In cross-dataset testing, it demonstrated impressive generalization, achieving a 93.31% AUC on the DeepfakeDetection dataset. Ablation studies confirmed the importance of the cross-attention mechanism for improved performance.
Approach
CAST uses a CNN backbone for spatial feature extraction and a Transformer for temporal feature encoding. A cross-attention mechanism dynamically fuses these features, allowing temporal tokens to attend to relevant spatial regions for improved detection of subtle time-dependent artifacts.
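A minimal PyTorch sketch of how such a pipeline could be wired together is given below, assuming an EfficientNet-B0 backbone from torchvision, a standard TransformerEncoder for temporal modeling, and nn.MultiheadAttention for the fusion step. All module names, dimensions, and the pooling and classification head are assumptions made for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

class CrossAttentiveSpatioTemporal(nn.Module):
    """Sketch of a CAST-style detector: per-frame CNN spatial features,
    Transformer temporal encoding, cross-attention fusion, binary head."""

    def __init__(self, dim=256, heads=4, num_layers=2):
        super().__init__()
        backbone = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT)
        self.cnn = backbone.features                      # per-frame spatial feature maps (1280 channels)
        self.proj = nn.Conv2d(1280, dim, kernel_size=1)   # project to a common embedding dimension
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        self.fusion = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)                     # real/fake logit

    def forward(self, frames):                            # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        fmap = self.proj(self.cnn(frames.flatten(0, 1)))  # (B*T, dim, h, w)
        spatial = fmap.flatten(2).transpose(1, 2)         # (B*T, h*w, dim) spatial tokens per frame
        spatial = spatial.reshape(b, t * spatial.shape[1], -1)   # (B, T*h*w, dim) all regions in the clip
        frame_tokens = fmap.mean(dim=(2, 3)).reshape(b, t, -1)   # (B, T, dim) one token per frame
        temporal = self.temporal(frame_tokens)            # (B, T, dim) temporally encoded tokens
        fused, _ = self.fusion(query=temporal, key=spatial, value=spatial)  # temporal queries attend to spatial regions
        return self.head(fused.mean(dim=1))               # (B, 1) video-level logit

# Usage on a dummy clip of 8 frames at 224x224 (batch of 2):
# model = CrossAttentiveSpatioTemporal()
# logit = model(torch.randn(2, 8, 3, 224, 224))
```

Using the temporal tokens as queries and the spatial tokens as keys/values mirrors the paper's stated design goal: each time step selects the facial regions most informative for spotting time-evolving artifacts, rather than merging the two streams by averaging or concatenation.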
Datasets
FaceForensics++, Celeb-DF (v2), DeepfakeDetection
Model(s)
EfficientNet (B0 and B5), Transformer, XceptionNet (in ablation study)
Author countries
India