Abstract: The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue for discussion and collaboration on AI theory and methods aimed at HRI since 2014. This year, following the 2021 review of the AI-HRI community's achievements over the preceding decade, we are focusing on a visionary theme: exploring the future of AI-HRI. Accordingly, we have added a Blue Sky Ideas track to foster forward-thinking discussion of future research at the intersection of AI and HRI. As always, we appreciate all contributions related to any topic in AI/HRI and welcome new researchers who wish to take part in this growing community. Building on the success of past symposia, AI-HRI impacts a variety of communities and problems, and has pioneered discussion of recent trends and interests. This year's AI-HRI Fall Symposium aims to bring together researchers and practitioners from around the globe, representing a number of university, government, and industry laboratories. In doing so, we hope to accelerate research in the field, support technology transition and user adoption, and determine future directions for our group and our research.
Abstract: Autonomous robots must communicate about their decisions in order to gain trust and acceptance. When doing so, robots must determine which actions are causal, i.e., which directly give rise to the desired outcome, so that these actions can be included in explanations. In the psychology of behavior learning, this sort of reasoning about action sequences has been studied extensively in the context of imitation learning. Yet these techniques and empirical insights are rarely applied to human-robot interaction (HRI). In this work, we discuss the relevance of behavior-learning insights for robot intent communication, and present the first application of these insights that enables a robot to efficiently communicate its intent by selectively explaining the causal actions in an action sequence.
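To make the core idea concrete, the following is a minimal sketch (not the authors' actual method, whose details the abstract does not specify) of one common way to identify causal actions: a counterfactual test under a toy deterministic transition model, where an action is deemed causal if dropping it from the sequence prevents the goal from being reached. All names and the example domain here are hypothetical.

```python
# Hypothetical sketch: counterfactual test for causal actions in a plan.
# An action is "causal" if removing it makes the replayed plan fail to
# reach the goal under a deterministic transition model.

def run(state, actions, transition):
    """Replay an action sequence from `state` using `transition`."""
    for action in actions:
        state = transition(state, action)
    return state

def causal_actions(state, actions, transition, goal_reached):
    """Return the actions whose removal makes the plan fail."""
    causal = []
    for i, action in enumerate(actions):
        remainder = actions[:i] + actions[i + 1:]
        if not goal_reached(run(state, remainder, transition)):
            causal.append(action)
    return causal

# Toy domain: the robot must hold a key before it can open the door.
def transition(state, action):
    state = dict(state)
    if action == "pick_up_key":
        state["has_key"] = True
    elif action == "open_door" and state.get("has_key"):
        state["door_open"] = True
    elif action == "wave":  # incidental action with no effect on the goal
        pass
    return state

plan = ["wave", "pick_up_key", "open_door"]
explainable = causal_actions({}, plan, transition,
                             lambda s: s.get("door_open", False))
print(explainable)  # ['pick_up_key', 'open_door'] -- 'wave' is omitted
```

In this sketch, the robot's explanation would mention only the two actions that pass the counterfactual test, omitting the incidental "wave", which matches the abstract's goal of selective, efficient intent communication.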
Abstract: Previous research has shown that the fairness and legitimacy of a moral decision-maker are important for people's acceptance of and compliance with the decision-maker. As technology rapidly advances, there have been increasing hopes and concerns about building artificially intelligent entities designed to intervene against norm violations. However, it is unclear how people would perceive artificial moral regulators that impose punishment on human wrongdoers. Grounded in theories from psychology and law, we predict that the perceived fairness of punishment imposed by a robot would increase the legitimacy of the robot functioning as a moral regulator, which would, in turn, increase people's willingness to accept and comply with the robot's decisions. We close with a conceptual framework for building a robot moral regulator that can successfully regulate norm violations.