From Prediction to Explanation: Multimodal, Explainable, and Interactive Deepfake Detection Framework for Non-Expert Users

Authors: Shahroz Tariq, Simon S. Woo, Priyanka Singh, Irena Irmalasari, Saakshi Gupta, Dev Gupta

Published: 2025-08-11 03:55:47+00:00

AI Summary

This paper introduces DF-P2E, a multimodal framework for interpretable deepfake detection. It integrates visual saliency maps, natural language captions, and large language model-generated narratives to provide explanations accessible to non-expert users while achieving competitive detection performance.

Abstract

The proliferation of deepfake technologies poses urgent challenges and serious risks to digital integrity, particularly within critical sectors such as forensics, journalism, and the legal system. While existing detection systems have made significant progress in classification accuracy, they typically function as black-box models, offering limited transparency and minimal support for human reasoning. This lack of interpretability hinders their usability in real-world decision-making contexts, especially for non-expert users. In this paper, we present DF-P2E (Deepfake: Prediction to Explanation), a novel multimodal framework that integrates visual, semantic, and narrative layers of explanation to make deepfake detection interpretable and accessible. The framework consists of three modular components: (1) a deepfake classifier with Grad-CAM-based saliency visualisation, (2) a visual captioning module that generates natural language summaries of manipulated regions, and (3) a narrative refinement module that uses a fine-tuned Large Language Model (LLM) to produce context-aware, user-sensitive explanations. We instantiate and evaluate the framework on the DF40 benchmark, the most diverse deepfake dataset to date. Experiments demonstrate that our system achieves competitive detection performance while providing high-quality explanations aligned with Grad-CAM activations. By unifying prediction and explanation in a coherent, human-aligned pipeline, this work offers a scalable approach to interpretable deepfake detection, advancing the broader vision of trustworthy and transparent AI systems in adversarial media environments.
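To make the first component concrete, below is a minimal Grad-CAM sketch. It assumes a generic torchvision CNN (ResNet-50) as a stand-in for the paper's fine-tuned classifier; the layer choice and the grad_cam helper are illustrative, not the authors' implementation.

```python
# Minimal Grad-CAM saliency sketch (assumes a ResNet-50 stand-in classifier).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]  # last convolutional block

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a saliency map in [0, 1] for an input of shape (1, 3, H, W)."""
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature map by the spatial mean of its gradient, then ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The resulting map highlights the regions driving the "fake" prediction, which the captioning and narrative modules then describe in natural language.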


Key findings
CLIP-large achieved the best deepfake detection performance (AUC 0.913 on DF40). BLIP2-Flan-T5-xxl showed the best captioning performance, but BLIP-large offered a better balance of performance and speed. Human evaluation showed high ratings for usefulness, understandability, and explainability of the framework's explanations.
Approach
DF-P2E uses a three-module approach: a deepfake classifier generating Grad-CAM saliency maps, a visual captioning module describing manipulated regions, and a narrative refinement module using an LLM to create user-friendly explanations.
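The sketch below illustrates how the three modules could be chained. The classifier, captioner, and LLM interfaces here are hypothetical stand-ins for the models used in the paper (e.g., CLIP-large for detection, BLIP for captioning, a fine-tuned LLaMA-3.2 for narrative refinement); it is a sketch of the data flow, not the authors' code.

```python
# Hypothetical end-to-end flow of the DF-P2E pipeline (interfaces assumed).
from dataclasses import dataclass

@dataclass
class Explanation:
    label: str            # "real" or "fake"
    confidence: float
    saliency_caption: str
    narrative: str

def explain_image(image, classifier, captioner, llm) -> Explanation:
    # 1) Deepfake classification plus Grad-CAM saliency over suspect regions.
    label, confidence, saliency_map = classifier.predict_with_gradcam(image)
    # 2) Caption the salient (possibly manipulated) regions in plain language.
    caption = captioner.describe(image, saliency_map)
    # 3) Refine caption and verdict into a user-sensitive narrative via an LLM.
    prompt = (f"The image was classified as {label} "
              f"(confidence {confidence:.2f}). "
              f"Salient regions: {caption}. "
              "Explain this verdict for a non-expert reader.")
    narrative = llm.generate(prompt)
    return Explanation(label, confidence, caption, narrative)
```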
Datasets
DF40 benchmark dataset
Model(s)
XceptionNet, CLIP-base, CLIP-large, BLIP, BLIP2, GIT, OFA, ViT-GPT2, PaliGemma, LLaMA-3.2-11B-Vision
Author countries
Australia, South Korea