Abstract: As automated vehicles (AVs) become increasingly popular, the question arises as to how cyclists will interact with such vehicles. This study investigated (1) whether cyclists spontaneously notice if a vehicle is driverless, (2) how well they perform a driver-detection task when explicitly instructed, and (3) how they carry out such a task. Using a Wizard-of-Oz method, 37 participants cycled a designated route and encountered an AV multiple times in two experimental sessions. In Session 1, participants cycled the route uninstructed, while in Session 2, they were instructed to verbally report whether they detected the presence or absence of a driver. Additionally, we recorded the participants' gaze behaviour with eye-tracking and their responses in post-session interviews. The interviews revealed that 30% of the cyclists spontaneously mentioned the absence of a driver (Session 1), and when instructed (Session 2), they detected the absence and presence of the driver with 93% accuracy. The eye-tracking data showed that cyclists looked at the vehicle more frequently and for longer in Session 2 than in Session 1. Furthermore, participants sampled the vehicle intermittently: they looked in front of the vehicle when it was far away and towards the windshield region when it was closer. The post-session interviews also indicated that participants were curious, felt safe, and reported a need to receive information about the AV's driving state. In conclusion, cyclists can detect the absence of a driver in an AV, and this detection may influence their perceptions of safety. Further research is needed to explore these findings in real-world traffic conditions.
Abstract: Robots are becoming increasingly intelligent and can autonomously perform tasks such as navigating between locations. However, human oversight remains crucial. This study compared two hands-free methods for directing mobile robots: voice control and gesture control. These methods were tested with the human either stationary or walking freely. We hypothesized that walking with the robot would lead to higher intuitiveness ratings and better task performance due to increased stimulus-response compatibility, assuming humans align themselves with the robot. In a 2x2 within-subject design, 218 participants guided the quadrupedal robot Spot using 90-degree rotation and walk-forward commands. After each trial, participants rated the intuitiveness of the command mapping, while post-experiment interviews were used to gather the participants' preferences. Results showed that voice control combined with walking with Spot was the most favored and intuitive, whereas gesture control while standing caused confusion for left/right commands. Despite this, 29% of participants preferred gesture control, citing task engagement and visual congruence as reasons. An odometry-based analysis revealed that participants aligned themselves behind Spot, particularly in the gesture control condition, when allowed to walk. In conclusion, voice control combined with walking produced the best outcomes. Improving physical ergonomics and adjusting gesture types could increase the effectiveness of gesture control.
Abstract: Recent advancements in AI have sped up the evolution of versatile robot designs. Chess provides a standardized environment that allows for the evaluation of the influence of robot behaviors on human behavior. This article presents an open-source chess robot for human-robot interaction (HRI) research, specifically focusing on verbal and non-verbal interactions. OpenChessRobot recognizes chess pieces using computer vision, executes moves, and interacts with the human player using voice and robotic gestures. We detail the software design, provide quantitative evaluations of the robot's efficacy, and offer a guide for its reproducibility. The code is accessible on GitHub: https://github.com/renchizhhhh/OpenChessRobot
Abstract: As global demand for fruits and vegetables continues to rise, the agricultural industry faces challenges in securing adequate labor. Robotic harvesting devices offer a promising solution to this problem. However, harvesting delicate fruits, notably blackberries, poses unique challenges due to their fragility. This study introduces and evaluates a prototype robotic gripper specifically designed for blackberry harvesting. The gripper features an innovative fabric tube mechanism that employs a motorized twisting action to gently envelop the fruit, ensuring uniform pressure application and minimizing damage. Three types of tubes were developed, varying in elasticity and compressibility through the use of foam padding, spandex, and food-safe cotton cheesecloth. Performance testing focused on assessing each gripper's ability to detach and release blackberries, with emphasis on quantifying damage rates. Results indicate that the proposed gripper achieved an 82% success rate in detaching blackberries and a 95% success rate in releasing them, demonstrating its potential for robotic harvesting applications.