DynamicLip: Shape-Independent Continuous Authentication via Lip Articulator Dynamics

Authors: Huashan Chen, Yifan Xu, Yue Feng, Ming Jian, Feng Liu, Pengfei Hu, Kebin Peng, Sen He, Zi Wang

Published: 2025-01-02 03:26:29+00:00

AI Summary

This paper proposes DynamicLip, a continuous biometric authentication system using lip articulator dynamics. It extracts shape-independent features from lip movements, achieving high accuracy (99.06%) and robustness against mimic and deepfake attacks.

Abstract

Biometric authentication has become increasingly popular due to its security and convenience; however, traditional biometrics are becoming less desirable in scenarios such as new mobile devices, Virtual Reality, and Smart Vehicles. For example, while face authentication is widely used, it suffers from significant privacy concerns: the collection of complete facial data makes it less desirable for privacy-sensitive applications. Lip authentication, on the other hand, has emerged as a promising biometric method. However, existing lip-based authentication methods depend heavily on the static lip shape when the mouth is closed, which can be less robust due to the dynamic motion of the lips and can barely work while the user is speaking. In this paper, we revisit the nature of lip biometrics and extract shape-independent features from the lips. We study the dynamic characteristics of lip biometrics based on articulator motion. Building on this knowledge, we propose a system for shape-independent continuous authentication via lip articulator dynamics. This system enables robust, shape-independent, and continuous authentication, making it particularly suitable for scenarios with high security and privacy requirements. We conducted comprehensive experiments in different environments and attack scenarios and collected a dataset of 50 subjects. The results indicate that our system achieves an overall accuracy of 99.06% and demonstrates robustness under advanced mimic attacks and AI deepfake attacks, making it a viable solution for continuous biometric authentication in various applications.


Key findings
DynamicLip achieves an overall accuracy of 99.06% and demonstrates robustness against mimic, advanced mimic, and AI deepfake attacks. The system's performance is relatively consistent across different cameras, viewing angles, and lip conditions.
Approach
DynamicLip builds a feature hierarchy spanning static and dynamic lip features. A Siamese neural network compares feature vectors extracted from lip videos, enabling continuous authentication.
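The Siamese comparison described above can be sketched as a toy verification routine: both the enrolled template and the probe pass through a shared embedding, and the probe is accepted when the cosine similarity between the two embeddings exceeds a threshold. This is a minimal illustration only; the linear embedding, the feature dimensions, and the 0.9 threshold are assumptions for the sketch, not the paper's actual network or parameters.

```python
import math
import random

random.seed(0)

# Hypothetical shared embedding: one fixed random linear map applied to
# BOTH inputs (weight sharing is what makes the comparison "Siamese").
W = [[random.gauss(0, 1) / 8 for _ in range(64)] for _ in range(16)]

def embed(features):
    """Project a raw lip-dynamics feature vector into the shared embedding space."""
    v = [sum(w * f for w, f in zip(row, features)) for row in W]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]  # unit-normalize so the dot product is cosine similarity

def verify(enrolled, probe, threshold=0.9):
    """Accept the probe if its embedding is close to the enrolled template's."""
    similarity = sum(a * b for a, b in zip(embed(enrolled), embed(probe)))
    return similarity >= threshold

enrolled = [random.gauss(0, 1) for _ in range(64)]
genuine = [x + 0.05 * random.gauss(0, 1) for x in enrolled]  # same user, slight variation
impostor = [random.gauss(0, 1) for _ in range(64)]           # different user

print(verify(enrolled, genuine))   # small perturbation keeps similarity near 1
print(verify(enrolled, impostor))  # unrelated features typically fall below the threshold
```

In a continuous-authentication setting, `verify` would run repeatedly on feature vectors extracted from successive video windows rather than once at login.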
Datasets
A custom dataset of 50 subjects speaking carefully designed phrases, encompassing variations in lighting, angles, and lip conditions.
Model(s)
Siamese neural network
Author countries
China, USA