LAVID: An Agentic LVLM Framework for Diffusion-Generated Video Detection

Authors: Qingyuan Liu, Yun-Yun Tsai, Ruijian Zha, Victoria Li, Pengyuan Shi, Chengzhi Mao, Junfeng Yang

Published: 2025-02-20 19:34:58+00:00

AI Summary

LAVID is a novel training-free framework for detecting diffusion-generated videos using Large Vision Language Models (LVLMs). It enhances LVLMs by automatically selecting relevant explicit knowledge tools and employing an online adaptation method for structured prompts, significantly improving detection accuracy.

Abstract

The impressive achievements of generative models in creating high-quality videos have raised concerns about digital integrity and privacy vulnerabilities. AI-generated content detection has been widely studied in the image domain (e.g., deepfakes), yet the video domain remains largely unexplored. Large Vision Language Models (LVLMs) have become an emerging tool for AI-generated content detection thanks to their strong reasoning and multimodal capabilities. They overcome limitations of traditional deep-learning-based methods, such as the lack of transparency and the inability to recognize new artifacts. Motivated by this, we propose LAVID, a novel LVLM-based AI-generated video detection framework with explicit knowledge enhancement. Our insights are as follows: (1) leading LVLMs can call external tools to extract information useful for their own video detection task; (2) structuring the prompt affects the LVLM's ability to reason over and interpret information in video content. Our proposed pipeline automatically selects a set of explicit knowledge tools for detection and then adaptively adjusts the structured prompt through self-rewriting. Unlike prior SOTA methods that train additional detectors, our method is fully training-free and requires only LVLM inference for detection. To facilitate our research, we also create a new benchmark, VidForensic, with high-quality videos generated from multiple video generation tools. Evaluation results show that LAVID improves F1 scores by 6.2% to 30.2% over the top baselines on our datasets across four SOTA LVLMs.
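
The two insights above can be made concrete with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: the tool names (optical_flow_stats, edge_sharpness), their outputs, and the prompt wording are hypothetical stand-ins for LAVID's explicit knowledge tools and structured prompt format.

```python
import json

# Hypothetical explicit-knowledge tools; LAVID's actual tool set may differ.
def optical_flow_stats(video_path: str) -> dict:
    """Placeholder: summarize frame-to-frame motion consistency."""
    return {"mean_flow_magnitude": 2.3, "temporal_inconsistency": 0.41}

def edge_sharpness(video_path: str) -> dict:
    """Placeholder: summarize per-frame edge/texture statistics."""
    return {"mean_edge_density": 0.12, "frame_variance": 0.05}

TOOLS = {"optical_flow": optical_flow_stats, "edge_sharpness": edge_sharpness}

def build_structured_prompt(video_path: str, selected_tools: list) -> str:
    """Package selected tool outputs into a structured prompt for the LVLM."""
    evidence = {name: TOOLS[name](video_path) for name in selected_tools}
    return (
        "You are a video forensics assistant.\n"
        "Task: decide whether the video is diffusion-generated or real.\n"
        f"Explicit knowledge (tool outputs): {json.dumps(evidence, indent=2)}\n"
        "Answer with a JSON object: {\"label\": \"real|generated\", \"reason\": \"...\"}"
    )

print(build_structured_prompt("sample.mp4", ["optical_flow", "edge_sharpness"]))
```

In this sketch the LVLM never sees raw pixels alone; it receives tool outputs packaged in a fixed, machine-readable structure, which is the role the structured prompt plays in the pipeline.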


Key findings
LAVID improves F1 scores by 6.2% to 30.2% over the top baselines across four SOTA LVLMs on the VidForensic dataset. The structured prompt approach significantly reduces hallucination. LAVID also shows competitive performance on deepfake detection with Celeb-DF-v1.
Approach
LAVID leverages LVLMs' reasoning capabilities to automatically select a set of explicit knowledge tools for video analysis. It uses a structured prompt format and an online adaptation process to refine the prompt based on model feedback, improving detection accuracy without requiring additional training.
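A minimal sketch of the online adaptation (self-rewriting) idea described above, assuming a generic lvlm_generate inference call (e.g., a wrapper around GPT-4o or Gemini-1.5-Pro); the critique wording, round limit, and stopping condition are illustrative assumptions, not LAVID's exact procedure.

```python
def lvlm_generate(prompt: str) -> str:
    """Stand-in for any LVLM inference call; plug in your own API client here."""
    raise NotImplementedError("Connect this to an LVLM before running.")

def adapt_prompt(initial_prompt: str, max_rounds: int = 3) -> str:
    """Let the LVLM critique and rewrite its own structured prompt."""
    prompt = initial_prompt
    for _ in range(max_rounds):
        feedback = lvlm_generate(
            "Critique this detection prompt. If any field is ambiguous or "
            "invites hallucinated evidence, rewrite the prompt; otherwise reply 'OK'.\n\n"
            + prompt
        )
        if feedback.strip() == "OK":
            break  # prompt judged stable; stop adapting
        prompt = feedback  # adopt the rewritten prompt for the next round
    return prompt
```

Because the adaptation only issues additional inference calls, the whole procedure stays training-free, consistent with the approach described above.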
Datasets
VidForensic (a new benchmark dataset containing 1400+ high-quality videos generated from multiple sources, including Kling, Runway Gen3, and OpenSORA), PANDA-70M, VidProM, Celeb-DF-v1, FaceForensics++
Model(s)
LLaVA-OV-7B, Qwen-VL-Max, Gemini-1.5-Pro, GPT-4o
Author countries
USA