Abstract: Following the recent interest in applying the Hyperdimensional Computing (HDC) paradigm in medical contexts to improve the performance of machine learning on biomedical data, this study represents the first attempt to employ such techniques for the classification of Attention Deficit Hyperactivity Disorder (ADHD) from electroencephalogram (EEG) signals. Using a spatio-temporal encoder and leveraging the properties of HDC, the proposed model achieves an accuracy of 88.9%, outperforming traditional Deep Neural Network benchmark models. The core of this research is not only to enhance the classification accuracy of the model but also to explore its efficiency in terms of required training data: a critical finding of the study is the identification of the minimum number of patients needed in the training set to achieve a sufficient level of accuracy. Indeed, the accuracy of our model trained on only $7$ of the $79$ patients is comparable to that of the benchmarks trained on the full dataset. This finding underscores the model's efficiency and its potential for fast and precise ADHD diagnosis in medical settings where large datasets are typically unattainable.
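The abstract does not detail the spatio-temporal encoder, but a minimal sketch of how multichannel EEG is commonly encoded in HDC may help fix ideas. Everything below (dimensionality `D`, channel count, quantization levels, and the bind/permute/bundle scheme) is an illustrative assumption in the spirit of standard HDC practice, not the authors' implementation:

```python
import numpy as np

D = 10_000          # hypervector dimensionality (typical HDC choice)
N_CHANNELS = 19     # EEG channels (illustrative)
N_LEVELS = 16       # amplitude quantization levels (illustrative)

rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar {-1, +1} hypervector."""
    return rng.choice([-1, 1], size=D)

# Item memories: one random hypervector per channel and per amplitude level.
channel_hvs = np.stack([random_hv() for _ in range(N_CHANNELS)])
level_hvs = np.stack([random_hv() for _ in range(N_LEVELS)])

def encode_epoch(eeg):
    """Encode an (n_samples, N_CHANNELS) EEG epoch into one hypervector.

    Spatial step: bind (elementwise multiply) each channel HV with the HV
    of its quantized amplitude, then bundle (sum) over channels.
    Temporal step: permute (cyclic shift) each spatial HV by its time
    index before bundling over time, so temporal order is preserved.
    """
    lo, hi = eeg.min(), eeg.max()
    levels = np.clip(((eeg - lo) / (hi - lo + 1e-9) * N_LEVELS).astype(int),
                     0, N_LEVELS - 1)
    acc = np.zeros(D)
    for t in range(eeg.shape[0]):
        spatial = (channel_hvs * level_hvs[levels[t]]).sum(axis=0)
        acc += np.roll(spatial, t)   # permutation encodes time
    return np.sign(acc)              # binarize back to bipolar

def train_prototypes(epochs, labels):
    """Class prototype = majority-sign bundle of that class's epochs."""
    return {y: np.sign(sum(encode_epoch(e) for e, l in zip(epochs, labels)
                           if l == y))
            for y in set(labels)}

def classify(epoch, protos):
    """Assign the class whose prototype is most cosine-similar."""
    hv = encode_epoch(epoch)
    return max(protos, key=lambda y: hv @ protos[y]
               / (np.linalg.norm(hv) * np.linalg.norm(protos[y]) + 1e-9))
```

Because training reduces to bundling encoded epochs into class prototypes, such models can be built from very few patients, which is consistent with the data-efficiency claim above.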
Abstract: In the context of artificial intelligence, the inherent human attribute of engaging in logical reasoning to facilitate decision-making is mirrored by the concept of explainability: the ability of a model to provide a clear and interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. We investigate the optimization of the input features fed to binary emotion-recognition classifiers, namely detected facial landmarks, using an improved version of the Integrated Gradients explainability method. The main contribution of this paper is the employment of an innovative explainable artificial intelligence algorithm to understand which facial landmark movements are crucial during the expression of an emotion, and to use this information to improve the performance of deep learning-based emotion classifiers. By means of explainability, we can optimize the number and position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we consider a set of binary deep models for emotion classification, initially trained on a complete set of facial landmarks, which is then progressively reduced through a suitable optimization procedure. The obtained results demonstrate the robustness of the proposed explainable approach in identifying the relevance of the different facial points for the different emotions, while also improving the classification accuracy and reducing the computational cost.
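The abstract names Integrated Gradients without specifying the improved variant; a minimal sketch of the standard method (Sundararajan et al., 2017) applied to flattened landmark coordinates may clarify how per-landmark relevance scores can drive the progressive feature reduction. The PyTorch framing, the zero baseline, the `steps=50` resolution, and the (x, y)-pair aggregation are all assumptions for illustration:

```python
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """Standard Integrated Gradients for one sample.

    x: (n_landmarks * 2,) flattened landmark coordinates.
    Returns one attribution score per input coordinate.
    """
    if baseline is None:
        baseline = torch.zeros_like(x)   # zero baseline: an assumption
    # Interpolate along the straight path from baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)      # (steps, n_features)
    path.requires_grad_(True)
    out = model(path)[:, target_class].sum()
    grads = torch.autograd.grad(out, path)[0]      # (steps, n_features)
    # Riemann approximation of the path integral of the gradients.
    return (x - baseline) * grads.mean(dim=0)

def landmark_relevance(attributions, n_landmarks):
    """Aggregate per-coordinate attributions into one score per landmark."""
    return attributions.abs().view(n_landmarks, 2).sum(dim=1)
```

Under this reading, a landmark whose relevance stays consistently low across samples of a given emotion is a natural candidate for removal in the progressive reduction procedure the abstract describes.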