TriDF: Evaluating Perception, Detection, and Hallucination for Interpretable DeepFake Detection
Authors: Jian-Yu Jiang-Lin, Kang-Yang Huang, Ling Zou, Ling Lo, Sheng-Ping Yang, Yu-Wen Tseng, Kun-Hsiang Lin, Chia-Ling Chen, Yu-Ting Ta, Yan-Tsung Wang, Po-Ching Chen, Hongxia Xie, Hong-Han Shuai, Wen-Huang Cheng
Published: 2025-12-11 14:01:01+00:00
AI Summary
This paper introduces TriDF, a comprehensive benchmark for interpretable DeepFake detection across image, video, and audio modalities, covering 16 DeepFake types produced by advanced synthesis models. TriDF evaluates models on three interlocking aspects: Perception (identifying fine-grained manipulation artifacts), Detection (classification performance), and Hallucination (the reliability of model-generated explanations). Experiments on state-of-the-art multimodal large language models show that accurate perception is essential for reliable detection, while hallucination can severely undermine decision-making, underscoring the interdependence of these three aspects.
Abstract
Advances in generative modeling have made it increasingly easy to fabricate realistic portrayals of individuals, creating serious risks for security, communication, and public trust. Detecting such person-driven manipulations requires systems that not only distinguish altered content from authentic media but also provide clear and reliable reasoning. In this paper, we introduce TriDF, a comprehensive benchmark for interpretable DeepFake detection. TriDF contains high-quality forgeries from advanced synthesis models, covering 16 DeepFake types across image, video, and audio modalities. The benchmark evaluates three key aspects: Perception, which measures the ability of a model to identify fine-grained manipulation artifacts using human-annotated evidence; Detection, which assesses classification performance across diverse forgery families and generators; and Hallucination, which quantifies the reliability of model-generated explanations. Experiments on state-of-the-art multimodal large language models show that accurate perception is essential for reliable detection, but hallucination can severely disrupt decision-making, revealing the interdependence of these three aspects. TriDF provides a unified framework for understanding the interaction between detection accuracy, evidence identification, and explanation reliability, offering a foundation for building trustworthy systems that address real-world synthetic media threats.
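The three axes can be pictured with a toy scorer. Everything below is illustrative, not the paper's actual protocol: the field names, the artifact-tag representation, and the exact scoring rules (detection as accuracy, perception as recall of human-annotated evidence, hallucination as the fraction of unsupported claims) are assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    is_fake: bool               # ground-truth label (hypothetical field)
    predicted_fake: bool        # model's classification
    gt_artifacts: set           # human-annotated artifact tags, e.g. {"blend_boundary"}
    claimed_artifacts: set      # artifact tags cited in the model's explanation

def evaluate(samples):
    # Detection: plain classification accuracy over all samples.
    detection = sum(s.predicted_fake == s.is_fake for s in samples) / len(samples)

    # Perception: recall of human-annotated artifacts among the model's claims.
    hits = sum(len(s.gt_artifacts & s.claimed_artifacts) for s in samples)
    total = sum(len(s.gt_artifacts) for s in samples)
    perception = hits / total if total else 1.0

    # Hallucination: fraction of claimed artifacts with no annotated support.
    claimed = sum(len(s.claimed_artifacts) for s in samples)
    unsupported = sum(len(s.claimed_artifacts - s.gt_artifacts) for s in samples)
    hallucination = unsupported / claimed if claimed else 0.0

    return detection, perception, hallucination
```

Even this toy version exhibits the interdependence the paper measures: a model can score perfectly on Detection while citing evidence that drives Hallucination up and Perception down, which is exactly the failure mode an interpretable detector must avoid.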