Effect of AI Performance, Risk Perception, and Trust on Human Dependence in Deepfake Detection AI system

Authors: Yingfan Zhou, Ester Chen, Manasa Pisipati, Aiping Xiong, Sarah Rajtmajer

Published: 2025-08-03 20:00:10+00:00

AI Summary

This study investigates how the performance of a deepfake-detection AI affects human trust in, and reliance on, that AI. In an online experiment with 400 participants, the researchers varied the AI's performance while participants identified synthetic images, revealing how users calibrate their reliance on the AI based on their perceived risk and the AI's predictions.

Abstract

Synthetic images, audio, and video can now be generated and edited by Artificial Intelligence (AI). In particular, the malicious use of synthetic data has raised concerns about potential harms to cybersecurity, personal privacy, and public trust. Although AI-based detection tools exist to help identify synthetic content, their limitations often lead to user mistrust and confusion between real and fake content. This study examines the role of AI performance in influencing human trust and decision making in synthetic data identification. Through an online human subject experiment involving 400 participants, we examined how varying AI performance impacts human trust and dependence on AI in deepfake detection. Our findings indicate how participants calibrate their dependence on AI based on their perceived risk and the prediction results provided by AI. These insights contribute to the development of transparent and explainable AI systems that better support everyday users in mitigating the harms of synthetic media.


Key findings
Participants' trust in the AI increased when it reported lower false positive rates, though overall trust remained low. Under high perceived risk, participants preferred AI systems with higher false positive rates (prioritizing caution); under low perceived risk, they preferred systems with lower false positive rates. The study also found a three-way interaction among perceived risk, AI performance, and AI predictions on human decision-making.
Approach
The researchers conducted an online experiment with 400 participants, manipulating AI performance (high vs. low false positive rate) and perceived risk (high vs. low). Participants identified real and synthetic images with the assistance of AI predictions, and their trust in and reliance on the AI were measured.
Datasets
A publicly available scientific dataset [37] containing real and synthetic images labeled by gender and ethnicity.
Model(s)
No specific deepfake detection model was used; the study simulated AI performance with varying false positive rates.
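Simulating an assistant's behavior at a fixed error rate, rather than running a real detector, can be sketched with a simple stochastic labeler. The rates below (0.30 and 0.05 false positive rate, 0.85 true positive rate) are illustrative assumptions, not the study's actual parameters:

```python
import random

def simulate_ai_prediction(is_synthetic, false_positive_rate, true_positive_rate, rng=random):
    """Simulate a deepfake-detection AI's verdict for one image.

    is_synthetic: ground-truth label of the image.
    false_positive_rate: chance of flagging a real image as synthetic.
    true_positive_rate: chance of correctly flagging a synthetic image.
    Returns True if the simulated AI labels the image "synthetic".
    """
    if is_synthetic:
        return rng.random() < true_positive_rate
    return rng.random() < false_positive_rate

# Two hypothetical conditions mirroring a high- vs. low-FPR manipulation,
# each applied to 10,000 real (non-synthetic) images.
rng = random.Random(0)
high_fpr_flags = sum(simulate_ai_prediction(False, 0.30, 0.85, rng) for _ in range(10_000))
low_fpr_flags = sum(simulate_ai_prediction(False, 0.05, 0.85, rng) for _ in range(10_000))
print(high_fpr_flags > low_fpr_flags)  # the high-FPR condition flags far more real images
```

Fixing the error rates this way gives precise control over the independent variable, which a real detection model (whose errors depend on the stimuli) would not.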
Author countries
USA