What Exactly is a Deepfake?

Authors: Yizhi Liu, Balaji Padmanabhan, Siva Viswanathan

Published: 2025-10-25 03:02:39+00:00

AI Summary

This paper analyzes 826 peer-reviewed publications (2017-2025) to systematically examine how deepfakes are conceptualized in academic literature. Using LLMs for content analysis, the study categorizes deepfake definitions along three dimensions: Identity Source, Intent (deceptive vs. non-deceptive), and Manipulation Granularity (holistic vs. targeted). Key findings confirm a pervasive threat focus but also trace a temporal shift toward recognizing beneficial applications in domains such as healthcare and education.

Abstract

Deepfake technologies are often associated with deception, misinformation, and identity fraud, raising legitimate societal concerns. Yet such narratives may obscure a key insight: deepfakes embody sophisticated capabilities for sensory manipulation that can alter human perception, potentially enabling beneficial applications in domains such as healthcare and education. Realizing this potential, however, requires understanding how the technology is conceptualized across disciplines. This paper analyzes 826 peer-reviewed publications from 2017 to 2025 to examine how deepfakes are defined and understood in the literature. Using large language models for content analysis, we categorize deepfake conceptualizations along three dimensions: Identity Source (the relationship between original and generated content), Intent (deceptive versus non-deceptive purposes), and Manipulation Granularity (holistic versus targeted modifications). Results reveal substantial heterogeneity that challenges simplified public narratives. Notably, a subset of studies discuss non-deceptive applications, highlighting an underexplored potential for social good. Temporal analysis shows an evolution from predominantly threat-focused views (2017 to 2019) toward recognition of beneficial applications (2022 to 2025). This study provides an empirical foundation for developing nuanced governance and research frameworks that distinguish applications warranting prohibition from those deserving support, showing that, with safeguards, deepfakes' realism can serve important social purposes beyond deception.


Key findings
The analysis revealed that 94.7% of academic papers conceptualize deepfakes primarily through a threat-focused lens, emphasizing deceptive applications such as complete identity (face) swapping. However, temporal analysis showed gradual diversification, with the share of papers discussing non-deceptive applications (e.g., research enhancement, education) increasing significantly between 2022 and 2025. The study concludes that greater conceptual clarity is needed to develop nuanced governance frameworks that distinguish harmful applications from potentially beneficial ones.
Approach
The authors conducted a systematic review and content analysis of 826 academic publications, applying a three-dimensional conceptual framework they developed (Identity Source, Intent, Manipulation Granularity). They used a large language model (DeepSeek-R1) for automated definition extraction and categorization, validating the output through manual review of conflicting reasoning traces.
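A pipeline like the one described above can be sketched as prompt construction plus strict validation of the model's reply. This is a minimal illustrative sketch, not the authors' actual pipeline: the label sets, prompt wording, and JSON schema below are assumptions, and the model call is replaced by a stubbed reply.

```python
import json

# Dimensions from the paper's framework; the label vocabularies are
# illustrative assumptions, not the authors' exact coding scheme.
DIMENSIONS = {
    "identity_source": ["same-identity", "cross-identity", "fully-synthetic"],
    "intent": ["deceptive", "non-deceptive", "unspecified"],
    "granularity": ["holistic", "targeted", "unspecified"],
}

def build_prompt(definition: str) -> str:
    """Compose a categorization prompt asking the model for strict JSON."""
    labels = "; ".join(f"{dim}: {', '.join(opts)}" for dim, opts in DIMENSIONS.items())
    return (
        "Categorize the following deepfake definition along three dimensions "
        f"({labels}). Respond with a JSON object keyed by dimension name.\n\n"
        f"Definition: {definition}"
    )

def parse_response(raw: str) -> dict:
    """Validate the model's JSON reply against the allowed label sets,
    so malformed or off-schema answers are flagged for manual review."""
    result = json.loads(raw)
    for dim, opts in DIMENSIONS.items():
        if result.get(dim) not in opts:
            raise ValueError(f"unexpected label for {dim}: {result.get(dim)!r}")
    return result

# Stubbed model reply; a real pipeline would send build_prompt(...) to the LLM.
reply = '{"identity_source": "cross-identity", "intent": "deceptive", "granularity": "holistic"}'
print(parse_response(reply)["intent"])  # deceptive
```

Validating replies against a fixed label set is what makes disagreements (e.g., conflicting reasoning traces) detectable and routable to human reviewers, as the authors describe.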
Datasets
826 peer-reviewed publications (2017-2025) from various disciplines.
Model(s)
DeepSeek-R1 (Large Language Model)
Author countries
USA