Presentation Attack Detection with Advanced CNN Models for Noncontact-based Fingerprint Systems

Authors: Sandip Purnapatra, Conor Miller-Lynch, Stephen Miner, Yu Liu, Keivan Bahmani, Soumyabrata Dey, Stephanie Schuckers

Published: 2023-03-09 18:01:10+00:00

AI Summary

This research presents a new presentation attack detection (PAD) dataset for contactless fingerprint systems, comprising live and spoof fingerprint images captured using various methods and materials. Using DenseNet-121 and NasNetMobile models trained on this dataset, the authors achieved high PAD accuracy with low APCER (0.14%) and BPCER (0.18%).

Abstract

Touch-based fingerprint biometrics is one of the most popular biometric modalities, with applications in several fields. Problems associated with touch-based techniques, such as the presence of latent fingerprints and hygiene issues arising from many people touching the same surface, have motivated the community to look for noncontact-based solutions. For the last few years, contactless fingerprint systems have been on the rise and in demand because of their ability to turn any device with a camera into a fingerprint reader. Yet, before we can fully utilize the benefits of noncontact-based methods, the biometric community needs to resolve a few concerns, such as the resiliency of these systems against presentation attacks. One of the major obstacles is the limited number of publicly available datasets with adequate spoof and live data. In this publication, we have developed a presentation attack detection (PAD) dataset of more than 7,500 four-finger images, more than 14,000 manually segmented single-fingertip images, and 10,000 synthetic fingertips (deepfakes). The PAD dataset was collected from six different Presentation Attack Instruments (PAIs) of three difficulty levels according to FIDO protocols, using five different PAI materials and different smartphone cameras with manual focusing. We utilized DenseNet-121 and NasNetMobile models together with our proposed dataset to develop PAD algorithms, achieving an attack presentation classification error rate (APCER) of 0.14% and a bona fide presentation classification error rate (BPCER) of 0.18%. We also report test results of the models against unseen spoof types to replicate uncertain real-world testing scenarios.


Key findings
The proposed DenseNet-121 model achieved an APCER of 0.14% and a BPCER of 0.18% on known PAIs. The Keras DenseNet-121 model showed promising generalization (an APCER of 0% on latex PAIs), although performance on other unseen PAIs varied. These results improve on the previous state of the art.
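For reference, the two reported error rates follow the standard PAD definitions: APCER is the fraction of attack (spoof) presentations misclassified as bona fide, and BPCER is the fraction of bona fide presentations misclassified as attacks. A minimal sketch (function names are illustrative, not from the paper):

```python
def apcer(attack_decisions):
    """Attack Presentation Classification Error Rate:
    fraction of attack (spoof) presentations wrongly accepted as bona fide."""
    return sum(1 for d in attack_decisions if d == "bonafide") / len(attack_decisions)

def bpcer(bonafide_decisions):
    """Bona fide Presentation Classification Error Rate:
    fraction of bona fide presentations wrongly rejected as attacks."""
    return sum(1 for d in bonafide_decisions if d == "attack") / len(bonafide_decisions)

# Toy example: 1 of 10 spoofs is accepted, 2 of 10 live samples are rejected.
print(apcer(["bonafide"] + ["attack"] * 9))      # 0.1
print(bpcer(["attack"] * 2 + ["bonafide"] * 8))  # 0.2
```

Note that APCER is typically reported per PAI species, which is why the paper can quote a separate APCER (0%) for latex spoofs.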
Approach
The authors created a new PAD dataset with diverse spoofing methods and difficulty levels, then trained DenseNet-121 and NasNetMobile models to classify live and spoof fingerprint images. Image processing involved manual segmentation of fingertips and patch extraction for model training.
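The patch-extraction step described above can be illustrated with a simple sliding-window sketch. This is a hedged example: the patch size and stride below are hypothetical choices for illustration, not parameters taken from the paper.

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=32):
    """Extract square patches from a 2-D grayscale image with a sliding window.

    Patches that would extend past the image border are skipped.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    if not patches:
        return np.empty((0, patch_size, patch_size), dtype=image.dtype)
    return np.stack(patches)

# A 128x128 fingertip crop yields a 3x3 grid of 64x64 patches at stride 32.
patches = extract_patches(np.zeros((128, 128)))
print(patches.shape)  # (9, 64, 64)
```

Each extracted patch would then be fed to the classifier as an independent training sample, with its label (live or spoof) inherited from the source fingertip image.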
Datasets
A new PAD dataset of over 23,000 single-fingertip images, including live data and spoofs from six different Presentation Attack Instruments (PAIs) of three difficulty levels, plus 10,000 synthetic fingertips generated using StyleGAN with ADA.
Model(s)
DenseNet-121 and NasNetMobile, each evaluated in both Keras and non-Keras implementations.
Author countries
USA