Abstract: Robots are becoming increasingly intelligent and can autonomously perform tasks such as navigating between locations. However, human oversight remains crucial. This study compared two hands-free methods for directing mobile robots: voice control and gesture control. Each method was tested with the participant either standing still or walking freely. We hypothesized that walking with the robot would lead to higher intuitiveness ratings and better task performance because of increased stimulus-response compatibility, assuming humans align themselves with the robot. In a 2×2 within-subjects design, 218 participants guided the quadrupedal robot Spot using 90-degree rotation and walk-forward commands. After each trial, participants rated the intuitiveness of the command mapping, and post-experiment interviews were used to gather participants' preferences. Results showed that voice control combined with walking alongside Spot was the most favored and intuitive, whereas gesture control while standing caused confusion for left/right commands. Despite this, 29% of participants preferred gesture control, citing task engagement and visual congruence as reasons. An odometry-based analysis revealed that, when allowed to walk, participants aligned themselves behind Spot, particularly in the gesture-control condition. In conclusion, voice control with walking produced the best outcomes. Improving physical ergonomics and adjusting gesture types could enhance the effectiveness of gesture control.
Abstract: Recent advancements in AI have accelerated the evolution of versatile robot designs. Chess provides a standardized environment in which the influence of robot behaviors on human behavior can be evaluated. This article presents an open-source chess robot for human-robot interaction (HRI) research, focusing on verbal and non-verbal interactions. OpenChessRobot recognizes chess pieces using computer vision, executes moves, and interacts with the human player using voice and robotic gestures. We detail the software design, provide quantitative evaluations of the robot's efficacy, and offer a guide for its reproducibility. The code is accessible on GitHub: https://github.com/renchizhhhh/OpenChessRobot