Identity Deepfake Threats to Biometric Authentication Systems: Public and Expert Perspectives

Authors: Shijing He, Yaxiong Lei, Zihan Zhang, Yuzhou Sun, Shujun Li, Chi Zhang, Juan Ye

Published: 2025-06-07 15:02:23+00:00

AI Summary

This research investigates public and expert perceptions of AI-generated deepfake threats to biometric authentication. A mixed-methods study (a survey of 408 professionals and interviews with 37 participants) reveals a disconnect between the public's convenience-driven reliance on biometrics and experts' concerns about the spoofing of static modalities. The authors propose a tri-layer mitigation framework prioritizing dynamic biometric signals, privacy-preserving data governance, and targeted education.

Abstract

Generative AI (Gen-AI) deepfakes pose a rapidly evolving threat to biometric authentication, yet a significant gap exists between expert understanding of these risks and public perception. This disconnect creates critical vulnerabilities in systems trusted by millions. To bridge this gap, we conducted a comprehensive mixed-method study, surveying 408 professionals across key sectors and conducting in-depth interviews with 37 participants (25 experts, 12 general public [non-experts]). Our findings reveal a paradox: while the public increasingly relies on biometrics for convenience, experts express grave concerns about the spoofing of static modalities like face and voice recognition. We found significant demographic and sector-specific divides in awareness and trust, with finance professionals, for example, showing heightened skepticism. To systematically analyze these threats, we introduce a novel Deepfake Kill Chain model, adapted from Hutchins et al.'s cyber kill chain framework to map the specific attack vectors used by malicious actors against biometric systems. Based on this model and our empirical findings, we propose a tri-layer mitigation framework that prioritizes dynamic biometric signals (e.g., eye movements), robust privacy-preserving data governance, and targeted educational initiatives. This work provides the first empirically grounded roadmap for defending against AI-generated identity threats by aligning technical safeguards with human-centered insights.


Key findings
A significant gap exists between public perception and expert understanding of deepfake threats to biometrics. Younger, tech-savvy professionals show higher AI familiarity. Dynamic biometric signals (e.g., eye movements) are identified as more robust against deepfakes than static modalities.
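
To illustrate why dynamic signals are harder to spoof, below is a minimal, hypothetical liveness heuristic in Python. It is not the authors' method: the paper does not publish an algorithm here, and the function names, sampling rate, and threshold are all assumptions. The idea is simply that a live gaze trace shows bursty saccadic velocity, while a static or replayed face yields a near-constant profile.

```python
# Hypothetical illustration only: the paper argues that dynamic signals such as
# eye movements resist spoofing better than static face/voice captures, but it
# does not specify an algorithm. This sketch flags suspiciously low gaze-velocity
# variability, as a static image or looped replay might produce. The sampling
# rate and threshold are assumptions, not values from the paper.
from statistics import pstdev

def gaze_velocities(samples, dt):
    """Gaze speed between consecutive (x, y) samples taken dt seconds apart."""
    return [
        (((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5) / dt
        for (x1, y1), (x2, y2) in zip(samples, samples[1:])
    ]

def looks_live(samples, dt=1 / 60, min_velocity_stdev=5.0):
    """Crude liveness heuristic: real eyes produce saccades, i.e. bursty velocity.

    A near-constant velocity profile (stdev below a tuned threshold) suggests a
    static image or a looped replay rather than a live user.
    """
    v = gaze_velocities(samples, dt)
    return len(v) > 1 and pstdev(v) >= min_velocity_stdev

# Example: a frozen gaze (spoof-like) vs. a trace containing a saccadic jump.
frozen = [(100.0, 100.0)] * 30
saccade = frozen[:15] + [(300.0, 140.0)] * 15
print(looks_live(frozen))   # False
print(looks_live(saccade))  # True
```

In practice, a deployed anti-spoofing system would combine such signals with challenge-response prompts and sensor-level checks rather than rely on a single statistic.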
Approach
The study uses a mixed-methods approach, combining a survey of 408 professionals and in-depth interviews with 37 participants (experts and public) to understand perceptions of deepfake threats to biometric authentication. A novel Deepfake Kill Chain model is introduced to analyze attack vectors, and a tri-layer mitigation framework is proposed.
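
This summary does not enumerate the Deepfake Kill Chain's stages, so the sketch below is an assumption: it extrapolates stage names from Hutchins et al.'s original cyber kill chain and maps the paper's three mitigation layers onto them, purely to show how such a model can structure defenses.

```python
# Illustrative sketch only: the stage names below are extrapolated from
# Hutchins et al.'s seven-stage cyber kill chain, not taken from the paper,
# and the layer/control pairings paraphrase the tri-layer framework.
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    RECONNAISSANCE = auto()   # harvest a target's face/voice samples online
    WEAPONIZATION = auto()    # train or prompt a Gen-AI model on those samples
    DELIVERY = auto()         # present the synthetic media to a biometric sensor
    EXPLOITATION = auto()     # spoof the matcher (e.g., static face recognition)
    ACTIONS = auto()          # act on the hijacked identity (fraud, takeover)

@dataclass
class Mitigation:
    stage: Stage
    layer: str        # one of the tri-layer framework's layers, paraphrased
    control: str

# Mapping defenses onto stages, mirroring the proposed tri-layer framework.
DEFENSES = [
    Mitigation(Stage.RECONNAISSANCE, "data governance",
               "limit public exposure of biometric data"),
    Mitigation(Stage.EXPLOITATION, "dynamic biometrics",
               "require eye-movement liveness checks"),
    Mitigation(Stage.DELIVERY, "education",
               "train users to distrust unsolicited capture requests"),
]

for m in DEFENSES:
    print(f"{m.stage.name:>15} -> [{m.layer}] {m.control}")
```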
Datasets
Data from a survey of 408 professionals and interviews with 37 participants (25 experts, 12 general public). The paper describes using selected Gen-AI deepfake images and videos as interview stimuli but does not specify their source datasets.
Model(s)
UNKNOWN. The paper discusses deepfake generation and detection models (GANs, VAEs, diffusion models, etc.) in the related work section but doesn't use them in its main contribution.
Author countries
United Kingdom