Abstract: There are challenges that must be overcome to make recommender systems useful in healthcare settings. The reasons are varied: the lack of publicly available clinical data, the difficulty users may have in understanding why a recommendation was made, the risks that may be involved in following that recommendation, and the uncertainty about its effectiveness. In this work, we address these challenges with a recommendation model that leverages the structure of psychometric data to provide visual explanations that are faithful to the model and interpretable by care professionals. We focus on a narrow healthcare niche, gerontological primary care, to show that the proposed recommendation model can assist the attending professional in the creation of personalised care plans. We report the results of a comparative offline performance evaluation of the proposed model on healthcare datasets collected by research partners in Brazil, as well as the results of a user study that evaluates the interpretability of the visual explanations the model generates. The results suggest that the proposed model can advance the application of recommender systems in this healthcare niche, which is expected to grow in demand, opportunities, and information technology needs as demographic changes become more pronounced.

Abstract: Explanations are crucial for improving transparency, persuasiveness, engagement, and user trust in Recommender Systems (RSs). However, evaluating how effectively explanation algorithms achieve those goals remains challenging due to the limitations of existing offline metrics. This paper introduces new metrics for the evaluation and validation of explanation algorithms based on the items and properties used to form an explanation sentence. To validate the metrics, three state-of-the-art post-hoc explanation algorithms were evaluated for six RSs, and the offline metric results were compared with those of an online user study. The findings show that the proposed offline metrics can effectively measure the performance of explanation algorithms, and they highlight a trade-off between the goals of transparency and trust, which are related to popular properties, and the goals of engagement and persuasiveness, which are associated with diversifying the properties displayed to users. Furthermore, the study contributes to the development of more robust evaluation methods for explanation algorithms in RSs.