Abstract: Robots are becoming increasingly intelligent and can autonomously perform tasks such as navigating between locations. However, human oversight remains crucial. This study compared two hands-free methods for directing mobile robots: voice control and gesture control. Each method was tested with the human both stationary and walking freely. We hypothesized that walking with the robot would lead to higher intuitiveness ratings and better task performance due to increased stimulus-response compatibility, assuming humans align themselves with the robot. In a 2×2 within-subject design, 218 participants guided the quadrupedal robot Spot using 90-degree rotation and walk-forward commands. After each trial, participants rated the intuitiveness of the command mapping, and post-experiment interviews were used to gather the participants' preferences. Results showed that voice control combined with walking with Spot was the most favored and intuitive, whereas gesture control while standing caused confusion for left/right commands. Despite this, 29% of participants preferred gesture control, citing task engagement and visual congruence as reasons. An odometry-based analysis revealed that, when allowed to walk, participants aligned themselves behind Spot, particularly in the gesture control condition. In conclusion, voice control with walking produced the best outcomes. Refining physical ergonomics and adjusting gesture types could improve the effectiveness of gesture control.
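The odometry-based alignment analysis can be illustrated with a short geometric sketch. The snippet below is a minimal illustration, not the study's actual analysis code: it expresses a participant's world position in the robot's body frame using the robot's odometry pose (x, y, yaw), so that a negative body-frame x coordinate means the participant is behind the robot. All names and values are illustrative assumptions.

```python
import math

def position_in_robot_frame(robot_x, robot_y, robot_yaw, person_x, person_y):
    """Express a person's world position in the robot's body frame.

    Positive body-frame x is in front of the robot; negative x is behind it.
    """
    dx = person_x - robot_x
    dy = person_y - robot_y
    # Rotate the world-frame offset by -yaw to obtain body-frame coordinates.
    bx = math.cos(robot_yaw) * dx + math.sin(robot_yaw) * dy
    by = -math.sin(robot_yaw) * dx + math.cos(robot_yaw) * dy
    return bx, by

# A person standing 1 m directly behind a robot that faces the +x direction:
print(position_in_robot_frame(0.0, 0.0, 0.0, -1.0, 0.0))  # (-1.0, 0.0)
```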
Abstract: Despite the significant advancements in computer vision models, their ability to generalize to novel object-attribute compositions remains limited. Existing methods for Compositional Zero-Shot Learning (CZSL) mainly focus on image classification. This paper aims to enhance CZSL in object detection without forgetting previously learned knowledge. We build on Grounding DINO, incorporating Compositional Soft Prompting (CSP) and extending it with Compositional Anticipation. We achieve a 70.5% improvement over CSP on the harmonic mean (HM) between seen and unseen compositions on the CLEVR dataset. Furthermore, we introduce Contrastive Prompt Tuning to incrementally address model confusion between similar compositions. We demonstrate the effectiveness of this method, achieving an increase of 14.5% in HM across the pretrain, increment, and unseen sets. Collectively, these methods provide a framework for learning various compositions with limited data, as well as for improving the performance of underperforming compositions when additional data becomes available.
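The core idea of Compositional Soft Prompting can be sketched compactly: attributes and objects each receive learnable embeddings, every attribute-object pair is composed into a single prompt vector, and image features are scored against all compositions. The PyTorch snippet below is a minimal, hedged illustration of that idea; the dimensions, the additive composition, and the cosine-similarity scoring are simplifying assumptions, not the paper's Grounding DINO-based implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalPrompts(nn.Module):
    def __init__(self, num_attrs, num_objs, dim=512):
        super().__init__()
        self.attr_emb = nn.Embedding(num_attrs, dim)  # learnable attribute tokens
        self.obj_emb = nn.Embedding(num_objs, dim)    # learnable object tokens

    def forward(self, attr_ids, obj_ids, image_feats):
        # Compose each (attribute, object) pair into a single prompt vector.
        prompts = self.attr_emb(attr_ids) + self.obj_emb(obj_ids)
        # Cosine similarity between image features and every composition.
        return F.normalize(image_feats, dim=-1) @ F.normalize(prompts, dim=-1).T

# Score a batch of 4 image feature vectors against all attribute-object pairs.
model = CompositionalPrompts(num_attrs=8, num_objs=3)
attrs, objs = torch.meshgrid(torch.arange(8), torch.arange(3), indexing="ij")
logits = model(attrs.flatten(), objs.flatten(), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 24])
```

Because only the attribute and object embeddings are trainable, seen pairs share parameters with unseen pairs, which is what allows the model to score novel compositions at test time.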
Abstract: Recent advancements in AI have sped up the evolution of versatile robot designs. Chess provides a standardized environment that allows the influence of robot behaviors on human behavior to be evaluated. This article presents an open-source chess robot for human-robot interaction (HRI) research, specifically focusing on verbal and non-verbal interactions. OpenChessRobot recognizes chess pieces using computer vision, executes moves, and interacts with the human player using voice and robotic gestures. We detail the software design, provide quantitative evaluations of the robot's efficacy, and offer a guide for its reproducibility. The code is accessible on GitHub: https://github.com/renchizhhhh/OpenChessRobot
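To make the "recognizes pieces and executes moves" pipeline concrete, the snippet below shows one way a recognized board state could be validated and turned into a pick-and-place command using the python-chess library. It is a hedged sketch, not OpenChessRobot's actual API; send_to_robot() and the chosen move are illustrative placeholders.

```python
import chess

def plan_move(fen: str, uci_move: str):
    """Validate a move against the recognized position and return the
    from/to squares the robot would need to act on."""
    board = chess.Board(fen)
    move = chess.Move.from_uci(uci_move)
    if move not in board.legal_moves:
        raise ValueError(f"{uci_move} is not legal in this position")
    return chess.square_name(move.from_square), chess.square_name(move.to_square)

def send_to_robot(from_sq: str, to_sq: str):
    # Placeholder for the actual robot motion command.
    print(f"pick {from_sq}, place {to_sq}")

start_fen = chess.STARTING_FEN  # stand-in for a position recognized by vision
send_to_robot(*plan_move(start_fen, "e2e4"))  # pick e2, place e4
```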
Abstract: Hand gesture recognition (HGR) systems provide a natural way for humans to interact with computer systems. Although various algorithms have been designed for this task, a host of external conditions, such as poor lighting or distance from the camera, makes it difficult to create an algorithm that performs well across a range of environments. In this work, we present GRLib: an open-source Python library able to detect and classify static and dynamic hand gestures. Moreover, the library can be trained on existing data for improved classification robustness. The proposed solution uses a feed from an RGB camera. The retrieved frames are subjected to data augmentation and passed on to MediaPipe Hands for hand landmark detection. The landmarks are then classified into their respective gesture classes. The library supports dynamic hand gestures through trajectories and keyframe extraction. It was found that the library outperforms another publicly available HGR system, MediaPipe Solutions, on three diverse, real-world datasets. The library is available at https://github.com/mikhail-vlasenko/grlib and can be installed with pip.
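The landmark-detection stage of the pipeline described above can be sketched with MediaPipe Hands directly. The snippet below is a minimal illustration, not GRLib's API: it reads frames from an RGB camera, runs MediaPipe Hands landmark detection, and hands the landmark coordinates to a classifier, where classify_landmarks() is a hypothetical placeholder for the trained gesture classifier (data augmentation and dynamic-gesture handling are omitted).

```python
import cv2
import mediapipe as mp

def classify_landmarks(coords):
    # Placeholder: a real classifier would compare against trained gestures.
    return "unknown"

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            landmarks = results.multi_hand_landmarks[0]
            coords = [(lm.x, lm.y, lm.z) for lm in landmarks.landmark]
            print(classify_landmarks(coords))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```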