Abstract: Explanations are crucial for improving the transparency, persuasiveness, engagement, and trustworthiness of Recommender Systems (RSs) for users. However, evaluating the effectiveness of explanation algorithms with respect to those goals remains challenging due to the limitations of existing offline metrics. This paper introduces new metrics for the evaluation and validation of explanation algorithms based on the items and properties used to compose an explanation sentence. To validate the metrics, three state-of-the-art post-hoc explanation algorithms were evaluated on six RSs, and the offline metric results were compared with those of an online user study. The findings show that the proposed offline metrics can effectively measure the performance of explanation algorithms and highlight a trade-off between the goals of transparency and trust, which are related to popular properties, and the goals of engagement and persuasiveness, which are associated with diversifying the properties displayed to users. Furthermore, the study contributes to the development of more robust evaluation methods for explanation algorithms in RSs.