Leveraging Pre-Trained Visual Models for AI-Generated Video Detection

Authors: Keerthi Veeramachaneni, Praveen Tirupattur, Amrit Singh Bedi, Mubarak Shah

Published: 2025-07-17 15:36:39+00:00

AI Summary

This paper proposes a novel approach for detecting AI-generated videos by leveraging features extracted from pre-trained visual models. The method achieves high detection accuracy (above 90% on average) without extensive model training, relying on features that inherently distinguish real from generated content.

Abstract

Recent advances in Generative AI (GenAI) have led to significant improvements in the quality of generated visual content. As AI-generated visual content becomes increasingly indistinguishable from real content, the challenge of detecting the generated content becomes critical in combating misinformation, ensuring privacy, and preventing security threats. Although there has been substantial progress in detecting AI-generated images, current methods for video detection are largely focused on deepfakes, which primarily involve human faces. However, the field of video generation has advanced beyond DeepFakes, creating an urgent need for methods capable of detecting AI-generated videos with generic content. To address this gap, we propose a novel approach that leverages pre-trained visual models to distinguish between real and generated videos. The features extracted from these pre-trained models, which have been trained on extensive real visual content, contain inherent signals that can help distinguish real from generated videos. Using these extracted features, we achieve high detection performance without requiring additional model training, and we further improve performance by training a simple linear classification layer on top of the extracted features. We validated our method on a dataset we compiled (VID-AID), which includes around 10,000 AI-generated videos produced by 9 different text-to-video models, along with 4,000 real videos, totaling over 7 hours of video content. Our evaluation shows that our approach achieves high detection accuracy, above 90% on average, underscoring its effectiveness. Upon acceptance, we plan to publicly release the code, the pre-trained models, and our dataset to support ongoing research in this critical area.


Key findings

The proposed method achieves high detection accuracy, above 90% on average. Performance is higher on videos generated by open-source models than on videos from state-of-the-art closed-source models. The method also outperforms the DeMamba model on the GenVideo dataset while using fewer training parameters and a smaller training dataset.

Approach

The approach uses features extracted from pre-trained visual models (the SigLIP image encoder and the VideoMAE video encoder) that were trained on extensive real visual content. Real and AI-generated videos are then distinguished from these features using either a training-free, distance-based method or a training-based linear classification model.
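
To make the training-free variant concrete, the sketch below pools per-frame SigLIP features into a single video embedding and labels a video by its cosine distance to class prototypes (mean embeddings of a small reference set of real and generated videos). The prototype construction, the use of cosine distance, the frame-pooling step, and the specific SigLIP checkpoint are illustrative assumptions; the paper's exact distance-based procedure is not reproduced here.

# Sketch of a training-free, distance-based detector (illustrative only).
# Assumptions: per-frame SigLIP features are extracted with Hugging Face
# transformers and mean-pooled over frames; real/fake "prototypes" are
# L2-normalized mean feature vectors computed from a labeled reference set.
import torch
import torch.nn.functional as F
from transformers import SiglipVisionModel, SiglipImageProcessor

processor = SiglipImageProcessor.from_pretrained("google/siglip-base-patch16-224")
encoder = SiglipVisionModel.from_pretrained("google/siglip-base-patch16-224").eval()

@torch.no_grad()
def video_feature(frames):
    """frames: list of PIL images sampled from one video -> pooled, normalized feature vector."""
    inputs = processor(images=frames, return_tensors="pt")
    feats = encoder(**inputs).pooler_output          # (num_frames, dim)
    return F.normalize(feats.mean(dim=0), dim=-1)    # mean-pool over frames

def is_generated(frames, real_prototype, fake_prototype):
    """Label a video by which class prototype its feature is closer to (cosine distance)."""
    f = video_feature(frames)
    d_real = 1 - torch.dot(f, real_prototype)
    d_fake = 1 - torch.dot(f, fake_prototype)
    return bool(d_fake < d_real)

Clip-level features from VideoMAE could presumably be substituted for the frame-level SigLIP features in the same way, changing only the encoder call.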

Datasets

VID-AID dataset (containing ~10,000 AI-generated videos from 9 different text-to-video models and 4,000 real videos from YouTube-VOS) and the GenVideo dataset (used for comparison).

Model(s)

SigLIP (image encoder), VideoMAE (video encoder), and a simple linear classification layer trained on top of the extracted features.
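
A minimal sketch of the training-based variant is shown below: a single linear layer trained with cross-entropy on frozen, pre-extracted feature vectors. The feature dimensionality, optimizer, learning rate, and epoch count are assumptions for illustration, not the paper's reported settings.

# Minimal sketch of a linear classification head trained on frozen,
# pre-extracted features (hyperparameters are illustrative assumptions).
import torch
from torch import nn

class LinearDetector(nn.Module):
    def __init__(self, feature_dim: int = 768):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 2)   # logits for {real, generated}

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.fc(features)

def train_head(features: torch.Tensor, labels: torch.Tensor, epochs: int = 10):
    """features: (N, feature_dim) frozen embeddings; labels: (N,) with 0 = real, 1 = generated."""
    model = LinearDetector(features.shape[1])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        opt.step()
    return model

Because only this small linear head is trained while the encoders stay frozen, the approach requires far fewer trainable parameters than end-to-end detectors such as DeMamba.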

Author countries

USA