Abstract: This paper introduces LIVE: Lidar Informed Visual Search, which addresses the problem of multi-robot (MR) planning and execution for robust visual detection of multiple objects. We perform extensive real-world experiments with a two-robot team in an indoor apartment setting. LIVE acts as a perception module that detects unmapped obstacles, or Short Term Features (STFs), in Lidar observations. STFs are filtered to produce regions that the robots visually inspect by modifying their plans online. Lidar Coverage Path Planning (CPP) is employed to generate highly efficient global plans for heterogeneous robot teams. Finally, we present a data model and a demonstration dataset, both available on our project website at https://sites.google.com/view/live-iros2023/home.
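To make the STF pipeline concrete, below is a minimal sketch, assuming a 2D occupancy-grid prior map and range-bearing Lidar returns, of how unmapped returns could be flagged and grouped into inspection regions. The function names, grid conventions, and clustering threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): flag Lidar returns that are
# not explained by a prior occupancy grid as Short Term Features (STFs), then
# group them into candidate regions for visual inspection. All names, grid
# conventions, and thresholds here are illustrative assumptions.
import numpy as np

def detect_stfs(ranges, angles, pose, occ_grid, resolution, origin):
    """Return world-frame points whose Lidar return lands on free space in the map."""
    x, y, theta = pose
    # Project range-bearing returns into the world frame.
    px = x + ranges * np.cos(theta + angles)
    py = y + ranges * np.sin(theta + angles)
    stfs = []
    for wx, wy in zip(px, py):
        gx = int((wx - origin[0]) / resolution)
        gy = int((wy - origin[1]) / resolution)
        if 0 <= gx < occ_grid.shape[1] and 0 <= gy < occ_grid.shape[0]:
            # A return on mapped free space suggests an unmapped obstacle (STF).
            if occ_grid[gy, gx] == 0:
                stfs.append((wx, wy))
    return np.array(stfs)

def cluster_regions(points, radius=0.5):
    """Greedily cluster STF points into inspection regions; return centroids."""
    regions = []
    for p in points:
        for r in regions:
            if np.linalg.norm(p - r["centroid"]) < radius:
                r["members"].append(p)
                r["centroid"] = np.mean(r["members"], axis=0)
                break
        else:
            regions.append({"centroid": p.copy(), "members": [p]})
    return [r["centroid"] for r in regions]
```

The resulting region centroids would then be handed to the online planner as viewpoints to visit, consistent with the abstract's description of plans being modified online.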
Abstract: This paper presents LIVES: LiDAR Informed Visual Search, an autonomous planner for unknown environments. We consider the pixel-wise environment perception problem in which one is given 2D range data from LiDAR scans and must contextually label points in the surroundings as map or non-map for visual planning. LIVES classifies incoming 2D scans from a wide Field of View (FoV) LiDAR in unseen environments without prior map information. The map-generalizable classifier is trained on expert data collected in real environments using a simple cart platform equipped with a map-based classifier. A visual planner then uses this contextual scan data to plan viewpoints that are more likely to yield detections of the search target. While conventional frontier-based methods for LiDAR and multi-sensor exploration map environments effectively, they are not tailored to searching for people indoors, the problem we investigate in this paper. LIVES is baselined against several existing exploration methods in simulation to verify its performance. Finally, it is validated in real-world experiments with a Spot robot in a 20 m x 30 m indoor apartment setting. Videos of the experimental validation can be found on our project website at https://sites.google.com/view/lives-icra-2024/home.
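The abstract implies two components: a map-based labeler used offline to generate expert map/non-map labels, and a map-free classifier that generalizes to unseen environments. The sketch below illustrates that split under assumed choices; the feature set, the random-forest model, and all names are hypothetical and not the paper's classifier.

```python
# Illustrative sketch of the two components implied by the abstract: a map-based
# labeler that generates expert "map"/"non-map" labels offline, and a map-free
# classifier trained on those labels so it can run in unseen environments.
# The feature set, the random forest model, and all names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def map_based_labels(ranges, angles, pose, occ_grid, resolution, origin):
    """Label a return 1 (non-map) if it falls on free space in the prior map."""
    x, y, theta = pose
    px = x + ranges * np.cos(theta + angles)
    py = y + ranges * np.sin(theta + angles)
    gx = ((px - origin[0]) / resolution).astype(int)
    gy = ((py - origin[1]) / resolution).astype(int)
    inside = (gx >= 0) & (gx < occ_grid.shape[1]) & (gy >= 0) & (gy < occ_grid.shape[0])
    labels = np.zeros(len(ranges), dtype=int)
    labels[inside] = (occ_grid[gy[inside], gx[inside]] == 0).astype(int)
    return labels

def scan_features(ranges, window=5):
    """Per-point features computed from the raw scan alone (no map required)."""
    pad = np.pad(ranges, window, mode="edge")
    feats = []
    for i in range(len(ranges)):
        w = pad[i:i + 2 * window + 1]          # local neighborhood of ranges
        feats.append([ranges[i], w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

# Train on scans labeled by the map-based labeler, then predict without a map:
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(scan_features(train_ranges), map_based_labels(train_ranges, ...))
# non_map_mask = clf.predict(scan_features(new_ranges)) == 1
```

The key design point reflected here is that the prior map is needed only at label-generation time; at deployment the classifier operates on scan-local features alone, which is what allows it to run in unseen environments.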
Abstract: Navigation strategies that intentionally incorporate contact with humans (i.e., "contact-based" social navigation) in crowded environments are largely unexplored, even though collision-free social navigation is a well-studied problem. Traditional social navigation frameworks require the robot to stop suddenly or "freeze" whenever a collision is imminent. This paradigm poses two problems: 1) freezing while navigating a crowd may cause people to trip and fall over the robot, resulting in more harm than the collision itself, and 2) in very dense social environments where collisions are unavoidable, such a control scheme would render the robot unable to move and would preclude the opportunity to study how humans incorporate robots into these environments. However, if robots are to be meaningfully included in crowded social spaces, such as busy streets, subways, stores, or other densely populated locales, there may not exist trajectories that can guarantee zero collisions. Thus, adoption of robots in these environments requires the development of minimally disruptive navigation plans that can safely plan for and respond to contacts. We propose a learning-based motion planner and control scheme that navigates dense social environments using safe contacts for an omnidirectional mobile robot. The planner is evaluated in simulation over 360 trials with crowd densities varying between 0.0 and 1.6 people per square meter. To our knowledge, our navigation scheme is able to use contact to safely navigate crowds of higher density than has been previously reported.
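As a concrete illustration of the evaluation metric and of the safe-contact idea (not the paper's learned planner), the sketch below computes local crowd density in people per square meter and caps commanded speed so that unavoidable contacts occur at low velocity; the radius, speed limits, and scaling rule are assumptions.

```python
# Illustrative sketch (not the paper's learned planner): compute the local crowd
# density used as the evaluation metric (people per square meter near the robot)
# and cap commanded speed so unavoidable contacts occur at low velocity. The
# radius, speed limits, and scaling rule are hypothetical.
import numpy as np

def local_crowd_density(robot_xy, people_xy, radius=2.0):
    """People per square meter within a disc of `radius` around the robot."""
    if len(people_xy) == 0:
        return 0.0
    d = np.linalg.norm(np.asarray(people_xy) - np.asarray(robot_xy), axis=1)
    return float(np.sum(d <= radius)) / (np.pi * radius ** 2)

def contact_aware_speed_cap(density, v_max=1.0, v_contact=0.3, d_lo=0.0, d_hi=1.6):
    """Interpolate the speed limit from v_max in open space down to a gentle
    contact speed v_contact at the densest crowds considered (1.6 people/m^2)."""
    t = np.clip((density - d_lo) / (d_hi - d_lo), 0.0, 1.0)
    return (1.0 - t) * v_max + t * v_contact

# Example: at 1.2 people per square meter the allowed speed drops toward v_contact.
speed_limit = contact_aware_speed_cap(1.2)
```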