Abstract: We introduce GeoSACS, a geometric framework for shared autonomy (SA). In variable environments, SA methods can combine robotic capabilities with real-time human input in a way that offloads the physical task from the human. To remain intuitive, it can be helpful to simplify the requirements for human input (i.e., reduce its dimensionality), which creates the challenge of mapping low-dimensional human inputs to the higher-dimensional control space of robots without requiring large amounts of data. We built GeoSACS on canal surfaces, a geometric representation that captures potential robot trajectories as a canal constructed from as few as two demonstrations. GeoSACS maps user corrections onto the cross-sections of this canal to provide an efficient SA framework. We extend canal surfaces to consider orientation and update the control frames to support intuitive mapping from user input to robot motions. Finally, we demonstrate GeoSACS in two preliminary studies, including a complex manipulation task where a robot loads laundry into a washer.
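(For readers unfamiliar with canal surfaces, the following is a minimal sketch of the standard parameterization of a canal surface with circular cross-sections; the symbols are illustrative and not necessarily the exact formulation used by GeoSACS.)

  S(t, \theta) = c(t) + r(t)\,\big(\cos\theta\, n(t) + \sin\theta\, b(t)\big)

Here c(t) is the spine (mean) curve obtained from the demonstrations, r(t) the cross-section radius, and n(t), b(t) an orthonormal frame spanning the plane orthogonal to the spine tangent. Under this view, a low-dimensional user correction can be interpreted as an offset within the disc of radius r(t) at the robot's current point along the canal.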
Abstract: In disaster-stricken environments, it is vital to assess damage quickly, analyse the stability of the environment, and allocate resources to the most vulnerable areas where victims might be present. These missions are difficult and dangerous for humans to conduct directly. We investigate a collaborative approach in which aerial and ground robots address this problem using their complementary capabilities. With its wider field of view, higher speed, and compact size, the aerial robot explores the area and builds a 3D feature-based map graph of the environment while providing a live video stream to the ground control station. Once the aerial robot finishes its exploration run, the ground control station processes the map and sends it to the ground robot. The ground robot, with its longer operation time, static stability, payload-delivery and tele-conference capabilities, can then autonomously navigate to the identified high-vulnerability locations. We conducted experiments using a quadcopter and a hexapod robot in an indoor modelled environment with obstacles and uneven ground. Additionally, we developed a low-cost drone add-on with value-added capabilities, such as victim detection, that can be attached to an off-the-shelf drone. The system was assessed for cost-effectiveness, energy efficiency, and scalability.
Abstract: Autonomous navigation systems based on computer vision sensors often require sophisticated robotics platforms that are very expensive. This poses a barrier to implementing and testing the complex localization, mapping, and navigation algorithms that are vital in robotics applications. Addressing this issue, in this work we compare mobile robotics platforms that support the Robot Operating System (ROS) and present an end-to-end implementation of an autonomous navigation system on a low-cost educational robotics platform, the AlphaBot2, integrated with an Intel RealSense D435 camera. Furthermore, we present a novel approach to implementing dynamic path planners as global path planners in the ROS framework. We evaluate the performance of this approach and highlight the improvements that a dynamic global path planner can achieve. This low-cost modified AlphaBot2 platform, together with the proposed dynamic global path planning approach, will help researchers and students gain hands-on experience with computer vision-based navigation systems.
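(The abstract does not give implementation details; as an illustration only, the sketch below shows in C++ how a custom planner is typically exposed as a global planner to the ROS 1 navigation stack through the nav_core::BaseGlobalPlanner plugin interface. The class name DynamicGlobalPlanner and the straight-line placeholder plan are hypothetical; a real dynamic planner such as D* Lite would replan incrementally inside makePlan() instead.)

// dynamic_global_planner.cpp -- illustrative sketch of a ROS 1 global planner plugin.
// Only the plugin boilerplate reflects the standard nav_core interface that
// move_base loads via pluginlib; the planning logic itself is a placeholder.
#include <string>
#include <vector>
#include <nav_core/base_global_planner.h>
#include <geometry_msgs/PoseStamped.h>
#include <costmap_2d/costmap_2d_ros.h>
#include <pluginlib/class_list_macros.h>

namespace dynamic_global_planner {

class DynamicGlobalPlanner : public nav_core::BaseGlobalPlanner {
public:
  void initialize(std::string name, costmap_2d::Costmap2DROS* costmap_ros) override {
    costmap_ros_ = costmap_ros;  // keep a handle to the costmap for (re)planning
  }

  bool makePlan(const geometry_msgs::PoseStamped& start,
                const geometry_msgs::PoseStamped& goal,
                std::vector<geometry_msgs::PoseStamped>& plan) override {
    plan.clear();
    // Placeholder: a trivial two-pose plan. A dynamic planner would instead
    // reuse its previous search results and repair the path when the costmap
    // changes, rather than planning from scratch on every call.
    plan.push_back(start);
    plan.push_back(goal);
    return true;
  }

private:
  costmap_2d::Costmap2DROS* costmap_ros_ = nullptr;
};

}  // namespace dynamic_global_planner

// Register the class so move_base can load it as its global planner, e.g. by
// setting the base_global_planner parameter to
// "dynamic_global_planner/DynamicGlobalPlanner".
PLUGINLIB_EXPORT_CLASS(dynamic_global_planner::DynamicGlobalPlanner,
                       nav_core::BaseGlobalPlanner)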