Abstract: AI methods are finding an increasing number of applications, but their often black-box nature has raised concerns about accountability and trust. The field of explainable artificial intelligence (XAI) has emerged in response to the need for human understanding of AI models. Evolutionary computation (EC), as a family of powerful optimization and learning tools, has significant potential to contribute to XAI. In this paper, we provide an introduction to XAI and review various techniques in current use for explaining machine learning (ML) models. We then focus on how EC can be used in XAI, and review some XAI approaches which incorporate EC techniques. Additionally, we discuss the application of XAI principles within EC itself, examining how these principles can shed light on the behavior and outcomes of EC algorithms in general, on the (automatic) configuration of these algorithms, and on the underlying problem landscapes that these algorithms optimize. Finally, we discuss some open challenges in XAI and opportunities for future research in this field using EC. Our aim is to demonstrate that EC is well suited for addressing current problems in explainability and to encourage further exploration of these methods, contributing to the development of more transparent and trustworthy ML models and EC algorithms.
Abstract: Evolutionary algorithms (EAs) are widely used to solve optimisation problems. However, challenges of transparency arise both in visualising the processes of an optimiser as it works through a problem and in understanding the features of many-objective problems, where comprehending four or more spatial dimensions is difficult. This work considers the visualisation of a population as an optimisation process executes. We have adapted an existing visualisation technique to multi- and many-objective problem data, enabling a user to visualise the EA's processes and identify specific problem characteristics, thus providing a greater understanding of the problem landscape. This is particularly valuable when the problem landscape is unknown, contains unknown features, or is many-objective. We show that this framework is effective on a suite of multi- and many-objective benchmark test problems, optimising them with NSGA-II and NSGA-III.