Identity Deepfake Threats to Biometric Authentication Systems: Public and Expert Perspectives

Authors: Shijing He, Yaxiong Lei, Zihan Zhang, Yuzhou Sun, Shujun Li, Chi Zhang, Juan Ye

Published: 2025-06-07 15:02:23+00:00

AI Summary

This paper investigates the gap between expert understanding and public perception of Gen-AI deepfake threats to biometric authentication systems. Drawing on a mixed-method study that surveyed 408 professionals and interviewed 37 participants (25 experts, 12 general public), the authors propose a novel Deepfake Kill Chain model. Based on this model and their empirical findings, they introduce a tri-layer mitigation framework that defends against AI-generated identity threats by integrating technical safeguards with human-centered insights.

Abstract

Generative AI (Gen-AI) deepfakes pose a rapidly evolving threat to biometric authentication, yet a significant gap exists between expert understanding of these risks and public perception. This disconnection creates critical vulnerabilities in systems trusted by millions. To bridge this gap, we conducted a comprehensive mixed-method study, surveying 408 professionals across key sectors and conducting in-depth interviews with 37 participants (25 experts, 12 general public [non-experts]). Our findings reveal a paradox: while the public increasingly relies on biometrics for convenience, experts express grave concerns about the spoofing of static modalities like face and voice recognition. We found significant demographic and sector-specific divides in awareness and trust, with finance professionals, for example, showing heightened skepticism. To systematically analyze these threats, we introduce a novel Deepfake Kill Chain model, adapted from Hutchins et al.'s cybersecurity frameworks to map the specific attack vectors used by malicious actors against biometric systems. Based on this model and our empirical findings, we propose a tri-layer mitigation framework that prioritizes dynamic biometric signals (e.g., eye movements), robust privacy-preserving data governance, and targeted educational initiatives. This work provides the first empirically grounded roadmap for defending against AI-generated identity threats by aligning technical safeguards with human-centered insights.


Key findings
Experts acknowledge the utility of biometric authentication but caution against rapidly evolving Gen-AI deepfake threats. They highlight vulnerabilities in static modalities such as face and voice recognition, and advocate for dynamic biometric signals and layered defenses. In contrast, the public largely trusts biometrics for convenience but lacks a deep understanding of deepfake risks; survey results reveal significant generational and professional divides in AI familiarity and critical security perspectives. Widespread public misconceptions about deepfake risks underscore the urgent need for comprehensive education and improved data governance.
Approach
The authors conducted a comprehensive mixed-method study, combining a survey of 408 professionals across various sectors with 37 semi-structured interviews (25 experts, 12 general public). They analyzed the survey data using one-way ANOVA to identify demographic differences, and applied inductive thematic analysis to the interview transcripts to explore perceptions of deepfake threats. These results informed the development of the Deepfake Kill Chain model and the tri-layer mitigation framework.
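To illustrate the kind of comparison a one-way ANOVA supports here, the sketch below computes the F statistic by hand for three hypothetical sector groups. The sector names and Likert-style trust scores are invented for demonstration; they are not the study's data, and the paper does not specify its analysis code.

```python
# Hypothetical illustration of the analysis step: a one-way ANOVA comparing
# mean trust-in-biometrics scores across professional sectors.
# All scores below are made up for demonstration purposes.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of samples."""
    k = len(groups)                              # number of groups
    n_total = sum(len(g) for g in groups)        # total sample size
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # between-group sum of squares: variation of group means around the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # within-group sum of squares: variation of observations around their group mean
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

finance = [2, 3, 2, 4, 3, 2, 3, 2]      # e.g. a more skeptical sector
healthcare = [4, 3, 5, 4, 4, 3, 4, 5]
education = [3, 4, 3, 5, 4, 4, 3, 4]
f_stat, df1, df2 = one_way_anova([finance, healthcare, education])
print(f"F({df1}, {df2}) = {f_stat:.2f}")
```

A large F relative to the F(df_between, df_within) distribution indicates that at least one group mean differs, which is the statistical basis for claims such as heightened skepticism among finance professionals.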
Datasets
A survey of 408 professionals; 37 semi-structured interviews (25 experts, 12 general public).
Model(s)
UNKNOWN
Author countries
United Kingdom