Abstract: One of the challenges in conducting research at the intersection of the CHI and Human-Robot Interaction (HRI) communities is addressing the gap between the design research methods each community accepts. While HRI focuses on interaction with robots and includes design research in its scope, the community is not as accustomed to exploratory design methods as the CHI community is. This workshop paper argues for bringing exploratory design, and specifically the Research through Design (RtD) methods that have been established in CHI over the past decade, to the foreground of HRI. RtD can enable design researchers in HRI to conduct exploratory design work that asks what the right thing to design is, and to share that work within the community.
Abstract: A robot operating in isolation needs to reason over the uncertainty in its model of the world and adapt its own actions to account for this uncertainty. Similarly, a robot interacting with people needs to reason over its uncertainty about the human's internal state, as well as over how this state may change as humans adapt to the robot. This paper summarizes our own work in this area, which illustrates the different ways that probabilistic planning and game-theoretic algorithms can enable such reasoning in robotic systems that collaborate with people. We start with a general formulation of the problem as a two-player game with incomplete information. We then articulate the different assumptions within this general formulation, and we explain how these lead to exciting and diverse robot behaviors in real-time interactions with actual human subjects in a variety of manufacturing, personal robotics, and assistive care settings.
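To make the kind of formulation this abstract refers to concrete, here is a minimal sketch of a robot computing a best response in a two-player game with incomplete information over the human's type. The action names, human types, payoff matrices, and policies below are illustrative assumptions, not the paper's actual notation or model.

```python
import numpy as np

# Minimal sketch: robot best response under a belief over hidden human
# "types" (incomplete information). All names and numbers are
# illustrative assumptions.

ROBOT_ACTIONS = ["go_left", "go_right"]

# Robot reward indexed [robot_action, human_action], one matrix per
# hypothesized human type (e.g., adaptive vs. stubborn).
REWARD = {
    "adaptive": np.array([[1.0, 0.0], [0.0, 1.0]]),
    "stubborn": np.array([[0.0, 1.0], [1.0, 0.0]]),
}

# Hypothesized human policy per type: probability of each human action.
HUMAN_POLICY = {
    "adaptive": np.array([0.8, 0.2]),
    "stubborn": np.array([0.2, 0.8]),
}

def best_response(belief):
    """Pick the robot action maximizing expected reward under a
    belief (dict: type -> probability) over human types."""
    expected = np.zeros(len(ROBOT_ACTIONS))
    for type_name, p_type in belief.items():
        # Expected reward of each robot action against this type's policy.
        expected += p_type * REWARD[type_name] @ HUMAN_POLICY[type_name]
    return ROBOT_ACTIONS[int(np.argmax(expected))]

if __name__ == "__main__":
    print(best_response({"adaptive": 0.7, "stubborn": 0.3}))
```

In a full interaction, the belief over types would be updated from observed human actions (e.g., by Bayesian filtering), and the choice would look ahead over the repeated game rather than a single step; this one-shot version only shows the core expected-reward comparison.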
Abstract: Human collaborators coordinate their actions effectively through both verbal and non-verbal communication. We believe that the same should hold for human-robot teams. We propose a formalism that enables a robot to decide optimally between doing a task and issuing an utterance. We focus on two types of utterances: verbal commands, where the robot expresses how it wants its human teammate to behave, and state-conveying actions, where the robot explains why it is behaving this way. Human subject experiments show that enabling the robot to issue verbal commands is the most effective form of communicating objectives while retaining user trust in the robot. Communicating "why" information, however, should be done judiciously, since many participants questioned the truthfulness of the robot's statements.
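The decision this abstract describes, choosing between executing a task action, issuing a verbal command, or conveying state, can be sketched as a net expected-value comparison. The option names, costs, and values below are hypothetical placeholders, not the paper's formalism.

```python
# Minimal sketch of choosing between a task action and two utterance
# types by comparing net expected value. All options, costs, and
# values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost: float            # time/effort cost of taking the option
    expected_value: float  # expected task value after taking it

def choose(options):
    """Pick the option with the highest net expected value."""
    return max(options, key=lambda o: o.expected_value - o.cost)

options = [
    # Doing the task directly: no communication cost, but the human
    # may keep acting on a wrong model of the robot's objective.
    Option("task_action", cost=0.0, expected_value=5.0),
    # Verbal command: small cost, directly aligns the human's next action.
    Option("verbal_command", cost=1.0, expected_value=8.0),
    # State-conveying utterance: explains why the robot acts this way;
    # its value depends on whether the human believes the explanation.
    Option("state_conveying", cost=1.5, expected_value=7.0),
]

print(choose(options).name)  # -> "verbal_command" under these numbers
```

Under these made-up numbers the verbal command wins, which loosely mirrors the experimental finding; the caveat about "why" information would enter such a model as a lower expected value for state-conveying actions when users doubt the robot's statements.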