Abstract: "The output of a computerised system can only be as accurate as the information entered into it." This rather trivial statement is the basis behind one of the driving concepts in biometric recognition: biometric quality. Quality is now widely regarded as the number one factor responsible for the good or bad performance of automated biometric systems. It refers to the ability of a biometric sample to be used for recognition purposes and produce consistent, accurate, and reliable results. Such a subjective term is objectively estimated by so-called biometric quality metrics. These algorithms nowadays play a pivotal role in the correct functioning of systems, providing feedback to the users and working as invaluable audit tools. In spite of their unanimously accepted relevance, some of the most widely used and deployed biometric characteristics are lagging behind in the development of these methods. This is the case for face recognition. After a gentle introduction to the general topic of biometric quality and a review of past efforts in face quality metrics, in the present work, we address the need for better face quality metrics by developing FaceQnet. FaceQnet is a novel open-source face quality assessment tool, inspired and powered by deep learning technology, which assigns a scalar quality measure to facial images as a prediction of their recognition accuracy. Two versions of FaceQnet have been thoroughly evaluated both in this work and also independently by NIST, showing the soundness of the approach and its competitiveness with respect to current state-of-the-art metrics. Even though our work is presented here particularly in the framework of face biometrics, the proposed methodology for building a fully automated quality metric can be very useful and easily adapted to other artificial intelligence tasks.
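The abstract describes a quality score as a prediction of recognition accuracy. A common way to check this property is an error-versus-reject analysis: discard the lowest-quality samples and verify that the false non-match rate (FNMR) on the remainder drops. The following is a minimal sketch of that evaluation; the function name, inputs, and threshold are illustrative assumptions, not part of the paper.

```python
import numpy as np

def fnmr_after_reject(scores, labels, quality, reject_fraction, threshold):
    """Illustrative error-versus-reject check for a quality metric.

    Keep the (1 - reject_fraction) highest-quality comparisons, then
    compute the false non-match rate (mated comparisons scoring below
    the decision threshold) on what remains. If the quality metric
    predicts recognition accuracy, FNMR should decrease as
    reject_fraction grows.
    """
    order = np.argsort(quality)                       # ascending quality
    keep = order[int(len(order) * reject_fraction):]  # drop worst fraction
    mated = scores[keep][labels[keep] == 1]           # mated comparisons only
    return float(np.mean(mated < threshold)) if mated.size else 0.0

# Toy data: low quality correlates with low comparison scores.
scores = np.array([0.9, 0.8, 0.2, 0.3])
labels = np.ones(4, dtype=int)
quality = np.array([0.9, 0.8, 0.1, 0.2])
print(fnmr_after_reject(scores, labels, quality, 0.0, 0.5))  # no rejection
print(fnmr_after_reject(scores, labels, quality, 0.5, 0.5))  # reject worst half
```

With no rejection, two of the four mated scores fall below the threshold (FNMR = 0.5); after rejecting the lowest-quality half, the FNMR drops to 0.0, which is the behaviour a sound quality metric should induce.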
Abstract: In this paper, we develop a quality assessment approach for face recognition based on deep learning. The method consists of a Convolutional Neural Network, FaceQnet, that is used to predict the suitability of a specific input image for face recognition purposes. The training of FaceQnet is done using the VGGFace2 database. We employ the BioLab-ICAO framework for labeling the VGGFace2 images with quality information related to their ICAO compliance level. The ground-truth quality labels are obtained using FaceNet to generate comparison scores. We employ the ground-truth data to fine-tune a ResNet-based CNN, making it capable of returning a numerical quality measure for each input image. Finally, we verify whether the FaceQnet scores are suitable to predict the expected performance when employing a specific image for face recognition with a COTS face recognition system. Several conclusions can be drawn from this work, most notably: 1) we managed to employ an existing ICAO compliance framework and a pre-trained CNN to automatically label data with quality information, 2) we trained FaceQnet for quality estimation by fine-tuning a pre-trained face recognition network (ResNet-50), and 3) we have shown that the predictions from FaceQnet are highly correlated with the face recognition accuracy of a state-of-the-art commercial system not used during development. FaceQnet is publicly available on GitHub.
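The labeling step described above, deriving ground-truth quality from FaceNet comparison scores against an ICAO-compliant reference image, can be sketched as follows. This is a minimal illustration under stated assumptions: the embeddings are taken as given (in practice they would come from FaceNet), and the mapping of cosine similarity to a [0, 1] quality label is one plausible choice, not the paper's exact formula.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def quality_labels(probe_embeddings, reference_embedding):
    """Hypothetical ground-truth labeling for quality training.

    The quality label of each probe image is the similarity of its face
    embedding to the embedding of an ICAO-compliant reference image of
    the same subject, mapped from [-1, 1] to [0, 1]. Images whose
    embeddings sit far from the compliant reference get low labels.
    """
    return [(cosine_similarity(e, reference_embedding) + 1.0) / 2.0
            for e in probe_embeddings]

# Toy 2-D "embeddings": one aligned with the reference, one orthogonal.
reference = np.array([1.0, 0.0])
probes = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(quality_labels(probes, reference))  # high label, then mid label
```

A regression head on a pre-trained ResNet-50 would then be fine-tuned to predict these labels directly from pixels, which is the role FaceQnet plays.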