Abstract: Presentation attacks remain a challenging issue for the security of automatic fingerprint recognition systems. This paper proposes a novel explainable residual slim network that detects presentation attacks by representing the visual features of the input fingerprint sample. The encoder-decoder of this network, together with a channel attention block, converts the input sample into a heatmap representation, while a modified residual convolutional neural network classifier discriminates between live and spoof fingerprints. The heatmap generator block and the modified ResNet classifier are trained together in an end-to-end manner. The performance of the proposed model is validated on the benchmark Liveness Detection Competition databases, i.e., LivDet 2011, 2013, 2015, 2017, and 2019, on which classification accuracies of 96.86%, 99.84%, 96.45%, 96.07%, and 96.27% are achieved, respectively. Compared with state-of-the-art techniques, the proposed method performs better under the benchmark presentation attack detection protocols in terms of classification accuracy.
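The abstract does not specify layer configurations, so the following is only a minimal PyTorch sketch of the described pipeline: an encoder-decoder with a squeeze-and-excitation-style channel attention block that produces a heatmap, followed by a slim residual classifier, with both parts connected for end-to-end training. All layer sizes, the attention design, and the class names (`HeatmapGenerator`, `PADNet`, etc.) are assumptions for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed form)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global descriptor per channel
            nn.Conv2d(channels, channels // reduction, 1), # squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), # excite
            nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.gate(x)  # reweight feature channels

class HeatmapGenerator(nn.Module):
    """Encoder-decoder mapping a grayscale fingerprint to a heatmap."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(32),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

class ResidualBlock(nn.Module):
    """Basic ResNet-style block with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.relu(x + self.body(x))

class PADNet(nn.Module):
    """Heatmap generator followed by a slim residual classifier; the two
    blocks are chained so gradients flow end-to-end, as the abstract states."""
    def __init__(self):
        super().__init__()
        self.generator = HeatmapGenerator()
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ResidualBlock(16), ResidualBlock(16),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2),  # live vs. spoof logits
        )
    def forward(self, x):
        heatmap = self.generator(x)          # explainable intermediate output
        return self.classifier(heatmap), heatmap

logits, heatmap = PADNet()(torch.randn(1, 1, 128, 128))
```

Returning the heatmap alongside the logits mirrors the explainability claim: the intermediate representation can be inspected to see which regions drove the live/spoof decision.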
Abstract: Automatic fingerprint recognition systems are the most extensively used systems for person authentication, although they are vulnerable to presentation attacks. Artificial artifacts fabricated from various materials are used to deceive these systems, posing a threat to the security of fingerprint-based applications. This paper proposes a novel end-to-end model to detect fingerprint presentation attacks. The proposed model incorporates MobileNet as a feature extractor and a support vector classifier to detect presentation attacks in cross-material and cross-sensor paradigms. The feature extractor's parameters are learned from the loss generated by the support vector classifier. The proposed model eliminates the need for intermediary data preparation procedures, unlike other static hybrid architectures. The performance of the proposed model is validated on the benchmark LivDet 2011, 2013, 2015, 2017, and 2019 databases, on which overall accuracies of 98.64%, 99.50%, 97.23%, 95.06%, and 95.20% are achieved, respectively. Compared with state-of-the-art methods, the proposed method performs better in cross-material and cross-sensor paradigms in terms of average classification error.
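Training a CNN feature extractor with the loss of an SVM-style head amounts to replacing the usual softmax cross-entropy with a hinge loss on a linear layer, so the backbone and the margin classifier are optimized jointly. A minimal PyTorch sketch of that idea follows; the MobileNet variant, head size, hyperparameters, and training step are all assumed for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class MobileNetSVM(nn.Module):
    """MobileNetV2 backbone with a linear SVM-style head (assumed design)."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)  # random init; pretrained weights could be loaded instead
        self.features = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.svm_head = nn.Linear(1280, num_classes)  # linear decision boundary, as in a soft-margin SVM
    def forward(self, x):
        f = self.pool(self.features(x)).flatten(1)
        return self.svm_head(f)  # raw class scores (margins), not probabilities

model = MobileNetSVM()
criterion = nn.MultiMarginLoss()  # multi-class hinge loss: the data term of the SVM objective
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            weight_decay=5e-4)  # weight decay acts as the SVM margin regularizer

# One training step on dummy data; grayscale fingerprints are assumed to be
# replicated to 3 channels to match the MobileNet input (spoof=0, live=1).
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()   # hinge-loss gradients flow back into the feature extractor (end-to-end)
optimizer.step()
```

Because the hinge loss backpropagates through the backbone, no separate feature-extraction or data-preparation stage is needed, which is the contrast the abstract draws with static hybrid CNN-plus-SVM pipelines that train the two components independently.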