Abstract: In robotics, ensuring that autonomous systems are comprehensible and accountable to users is essential for effective human-robot interaction. This paper introduces a novel approach that integrates user-centered design principles directly into the core of robot path planning processes. We propose a probabilistic framework for automated planning of explanations for robot navigation, in which different users' preferences regarding explanations are probabilistically modeled to capture the stochasticity of real-world human-robot interaction and to tailor how the robot communicates its decisions and actions to humans. This approach aims to enhance the transparency of robot path planning and to adapt to diverse user explanation needs by anticipating the types of explanations that will satisfy individual users.
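The abstract above describes probabilistically modeling user explanation preferences. A minimal sketch of one possible reading, assuming invented user profiles, explanation types, and preference probabilities (none of these appear in the source):

```python
# Hypothetical illustration: model each user profile as a categorical
# distribution over explanation types, then select the type with the
# highest probability of satisfying that user. All profile names,
# explanation types, and probabilities below are assumed examples.

# P(explanation type satisfies user | user profile) -- assumed values
USER_PREFERENCES = {
    "novice": {"visual": 0.7, "textual": 0.2, "contrastive": 0.1},
    "expert": {"visual": 0.1, "textual": 0.3, "contrastive": 0.6},
}

def best_explanation(profile: str) -> str:
    """Return the explanation type most likely to satisfy the user."""
    prefs = USER_PREFERENCES[profile]
    return max(prefs, key=prefs.get)
```

Under this toy model, a novice would receive a visual explanation and an expert a contrastive one; the paper's framework would learn or elicit such distributions rather than hard-code them.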
Abstract: To bring robots into human everyday life, their capacity for social interaction must increase. One way for robots to acquire social skills is by assigning them the concept of identity. This research focuses on the concept of \textit{Explanation Identity} within the broader context of robots' roles in society, particularly their ability to interact socially and explain decisions. Explanation Identity refers to the combination of characteristics and approaches robots use to justify their actions to humans. Drawing from different technical and social disciplines, we introduce Explanation Identity as a multidisciplinary concept and discuss its importance in Human-Robot Interaction. Our theoretical framework highlights the necessity for robots to adapt their explanations to the user's context, demonstrating empathy and ethical integrity. This research emphasizes the dynamic nature of robot identity and guides the integration of explanation capabilities in social robots, aiming to improve user engagement and acceptance.
Abstract: Navigation is a must-have skill for any mobile robot. A core challenge in navigation is the need to account for a wide range of possible environment configurations and navigation contexts. We claim that a mobile robot should be able to explain its navigational choices, making its decisions understandable to humans. In this paper, we briefly present our approach to explaining navigational decisions of a robot through visual and textual explanations. We propose a user study to test the understandability and simplicity of the robot explanations and outline our further research agenda.
Abstract: The continued development of robots has enabled their wider usage in human surroundings. Robots are more trusted to make increasingly important decisions with potentially critical outcomes. Therefore, it is essential to consider the ethical principles under which robots operate. In this paper, we examine how contrastive and non-contrastive explanations can be used in understanding the ethics of robot action plans. We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations. Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
Abstract: In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user's expectation. We frame Explainable AI Planning in the context of the plan negotiation problem, in which a succession of hypothetical planning problems are generated and solved. The object of the negotiation is for the user to understand and ultimately arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are contrastive, i.e., "why A rather than B?". We use the data from this study to construct a taxonomy of user questions that often arise during plan negotiation. We formally define our approach to plan negotiation through model restriction as an iterative process. This approach generates hypothetical problems and contrastive plans by restricting the model through constraints implied by user questions. We formally define model-based compilations in PDDL2.1 of each constraint derived from a user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework that employs iterative model restriction. We demonstrate its benefits in a second user study.
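The iterative model restriction described in the abstract above can be sketched with a toy planner standing in for a real PDDL2.1 solver. Everything here (the action set, the question-to-constraint mapping, the planner itself) is an assumed simplification for illustration, not the paper's actual compilation scheme:

```python
# Toy stand-in for a planner: each "action" has a name and a cost, and a
# "plan" is just the allowed actions ordered by cost. A constraint
# forbids one action name, mimicking a restriction compiled from a
# contrastive user question such as "why A rather than B?".
def solve(actions, constraints):
    """Return a plan (ordered action names) over actions not ruled out."""
    allowed = [a for a in actions if a["name"] not in constraints]
    return [a["name"] for a in sorted(allowed, key=lambda a: a["cost"])]

def negotiate(actions, question_to_constraint, user_questions):
    """Iterative model restriction: after each user question, add the
    implied constraint and re-solve the hypothetical problem."""
    constraints = set()
    contrastive_plans = []
    for question in user_questions:
        constraints.add(question_to_constraint[question])
        contrastive_plans.append(solve(actions, constraints))
    return contrastive_plans
```

Each round yields a contrastive plan the user can compare against the original, converging (in the paper's framing) on a plan the user finds satisfactory.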
Abstract: The development of robotics and AI agents has enabled their wider usage in human surroundings. AI agents are more trusted to make increasingly important decisions with potentially critical outcomes. It is essential to consider the ethical consequences of the decisions made by these systems. In this paper, we present how contrastive explanations can be used for comparing the ethics of plans. We build upon an existing ethical framework to allow users to make suggestions about plans and receive contrastive explanations.
Abstract: Explainable AI is an important area of research within which Explainable Planning is an emerging topic. In this paper, we argue that Explainable Planning can be designed as a service -- that is, as a wrapper around an existing planning system that utilises the existing planner to assist in answering contrastive questions. We introduce a prototype framework to facilitate this, along with some examples of how a planner can be used to address certain types of contrastive questions. We discuss the main advantages and limitations of such an approach and identify open questions for Explainable Planning as a service that point to several possible research directions.
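The "wrapper around an existing planning system" idea in the abstract above can be sketched as follows. This is a minimal assumed interface, not the paper's prototype framework: the planner is any callable from a problem to a plan, and a contrastive question is answered by re-planning on a restricted problem:

```python
# Hypothetical sketch of Explainable Planning as a service: wrap an
# existing, unmodified planner and answer "why A rather than B?" by
# re-invoking it on a restricted problem that forces the alternative.
class ExplainerService:
    def __init__(self, planner):
        self.planner = planner  # existing planning system, used as-is

    def why_not(self, problem, restriction):
        """Solve the original and a restricted problem, and return both
        plans so the user can compare them."""
        original = self.planner(problem)
        alternative = self.planner(restriction(problem))
        return {"original": original, "alternative": alternative}
```

The service never needs to inspect the planner's internals; the restriction function encodes the user's contrastive question, which is what makes the wrapper approach attractive.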