Abstract: The interpretability of random forest (RF) models is a research topic of growing interest in the machine learning (ML) community. RF is widely regarded as a powerful ensemble method, given its predictive performance, flexibility, and ease of use. Furthermore, the training process of an RF model is understandable, since the decision tree ensemble is built with an intuitive and intelligible procedure. However, the resulting RF model is regarded as a "black box" because of its numerous deep decision trees: gaining visibility over the entire process that induces the final decisions by exploring each decision tree is complicated, if not impossible. This opacity limits the acceptance and deployment of RF models in several fields of application. Several papers have tackled the interpretation of RF models. This paper aims to provide an extensive review of the methods used in the literature to interpret trained RF models. We analyze these methods and classify them along several axes. Although this review is not exhaustive, it provides a taxonomy of techniques that should guide users in choosing the most appropriate tools for interpreting RF models, depending on the interpretability aspects sought. It should also be valuable for researchers who aim to focus their work on the interpretability of RF models, or of ML black-box models in general.
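To make the opacity described above concrete, the following sketch (assuming Python with scikit-learn; the dataset, hyperparameters, and printed statistics are illustrative choices, not taken from the paper) trains a small RF, counts the decision nodes a user would have to inspect, and prints the impurity-based feature importances that many interpretation methods build on:

```python
# A glimpse of RF opacity: even on a toy dataset, the ensemble holds
# thousands of decision nodes spread across a hundred deep trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(data.data, data.target)

n_nodes = sum(est.tree_.node_count for est in rf.estimators_)
deepest = max(est.tree_.max_depth for est in rf.estimators_)
print(f"{len(rf.estimators_)} trees, {n_nodes} nodes in total, "
      f"deepest tree: {deepest}")

# One common global interpretation tool: impurity-based feature importances.
for name, imp in zip(data.feature_names, rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Inspecting any single tree misses the ensemble's aggregated behavior, which is precisely the gap the reviewed interpretation methods aim to close.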
Abstract: Motivated by the interpretability question in ML models as a crucial element for the successful deployment of AI systems, this paper focuses on rule extraction as a means of interpreting neural networks. Through a systematic literature review, different approaches for extracting rules from feedforward neural networks, an important building block of deep learning models, are identified and explored. The findings reveal a range of methods developed over more than two decades, mostly suited to shallow neural networks, with recent developments addressing the challenges of deep learning models. Rules offer a transparent and intuitive means of explaining neural networks, making this study a comprehensive introduction for researchers interested in the field. While the study specifically addresses feedforward networks with supervised learning and crisp rules, future work can extend to other network types, other machine learning methods, and fuzzy rule extraction.
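As a concrete illustration of the pedagogical (black-box) family of rule-extraction methods that such reviews cover, the sketch below (assuming Python with scikit-learn; the network architecture, surrogate tree depth, and dataset are illustrative assumptions, not the paper's own method) trains a small feedforward network and then distills it into a shallow decision tree whose root-to-leaf paths read as crisp if-then rules:

```python
# Pedagogical rule extraction sketch: treat the trained network as an oracle
# and fit an interpretable surrogate (a shallow decision tree) to its outputs.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A small feedforward network (architecture chosen only for illustration).
net = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)

# Fit the surrogate to the network's predictions, not the true labels,
# so the extracted rules approximate the network's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# Each root-to-leaf path is a crisp if-then rule over the input features.
print(export_text(surrogate, feature_names=list(data.feature_names)))

# Fidelity: how often the rules agree with the network on the training data.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"rule fidelity to the network: {fidelity:.2%}")
```

Reporting fidelity rather than accuracy alone reflects standard practice in rule-extraction evaluation: the rules are judged by how faithfully they mimic the network, independently of how well the network itself performs.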