Abstract:In 2018 the European Commission highlighted the need for a human-centered approach to AI. This claim gains even more relevance when considering technologies specifically designed to interact directly and collaborate physically with human users in the real world. This is notably the case of social robots. The domain of Human-Robot Interaction (HRI) emerged to investigate these issues. "Human-robot trust" has been highlighted as one of the most challenging and intriguing factors influencing HRI. On the one hand, user studies and technical experts underline that trust is a key element in facilitating user acceptance, thereby increasing the chances of accomplishing the given task. On the other hand, this phenomenon also raises ethical and philosophical concerns, leading scholars in these domains to argue that humans should not trust robots. However, trust in HRI is not an index of fragility; it is rooted in anthropomorphism and is a natural characteristic of every human being. Thus, instead of focusing solely on how to inspire user trust in social robots, this paper argues that what should be investigated is to what extent, and for which purposes, it is suitable to trust robots. Such an endeavour requires an interdisciplinary approach taking into account (i) technical needs and (ii) psychological implications.
Abstract:Significant advances in autonomous systems, together with a much wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present examples of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car-selection example and in Iris flower classification by presenting complete explanations (i.e. the causes of an individual prediction) and contrastive explanations (i.e. contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
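The abstract above only names the CI and CU concepts; as a rough illustration of the underlying idea, the following minimal Python sketch estimates both values for one feature of one instance by sweeping that feature over its range while keeping the rest of the instance (the "context") fixed. The exact formulas, the assumed [0, 1] absolute output range, and the toy model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def contextual_importance_utility(model, x, feature_idx, feature_range, n_samples=100):
    """Sketch of Contextual Importance (CI) and Contextual Utility (CU) for one
    feature of instance x: CI relates the output variation caused by the feature
    in this context to the full output range; CU locates the current output
    within that context-specific variation."""
    # Sweep the chosen feature over its value range, keeping the other features fixed.
    candidates = np.tile(x, (n_samples, 1))
    candidates[:, feature_idx] = np.linspace(feature_range[0], feature_range[1], n_samples)
    outputs = model(candidates)

    cmin, cmax = outputs.min(), outputs.max()  # context-specific output range
    absmin, absmax = 0.0, 1.0                  # assumed absolute output range (e.g. probabilities)
    y = model(x.reshape(1, -1))[0]             # output for the instance of interest

    ci = (cmax - cmin) / (absmax - absmin)
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.0
    return ci, cu

# Toy non-linear "model" over two features in [0, 1], used only for illustration.
model = lambda X: 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1])))
x = np.array([0.6, 0.3])
ci, cu = contextual_importance_utility(model, x, feature_idx=0, feature_range=(0.0, 1.0))
print(f"CI = {ci:.2f}, CU = {cu:.2f}")
```

The two numbers can then be rendered for the end-user, e.g. mapping CI to words such as "highly important" and CU to "favourable"/"unfavourable", which is the kind of natural-language presentation the abstract refers to.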
Abstract:This paper presents an initial design concept and specification of a civilian Unmanned Aerial Vehicle (UAV) management simulation system that focuses on explainability for the human-in-the-loop control of semi-autonomous UAVs. The goal of the system is to facilitate operator intervention in critical scenarios (e.g. avoiding safety issues or financial risks). Explainability is supported via user-friendly abstractions over Belief-Desire-Intention (BDI) agents. To evaluate the effectiveness of the system, a human-computer interaction study is proposed.
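The abstract describes the BDI-based explainability layer only at a high level; purely as a hypothetical sketch (not the paper's specification), the following Python snippet shows how an agent's beliefs, desires, and intentions could be exposed to the operator as a natural-language explanation. All class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgentView:
    beliefs: dict = field(default_factory=dict)     # what the UAV currently holds true
    desires: list = field(default_factory=list)     # goals it would like to achieve
    intentions: list = field(default_factory=list)  # goals it has committed to

    def explain(self) -> str:
        """Render the agent's mental state as a short operator-facing explanation."""
        beliefs = ", ".join(f"{k} is {v}" for k, v in self.beliefs.items()) or "nothing unusual"
        return (f"I am pursuing: {', '.join(self.intentions) or 'no goal'} "
                f"because I believe {beliefs}; "
                f"other options considered: {', '.join(self.desires) or 'none'}.")

# Hypothetical critical scenario shown to the operator before an intervention.
uav = BDIAgentView(
    beliefs={"battery": "low", "wind": "strong"},
    desires=["complete survey", "return to base"],
    intentions=["return to base"],
)
print(uav.explain())
```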