Abstract: Frequency-modulated continuous-wave (FMCW) scanning radar has emerged as an alternative to spinning LiDAR for state estimation on mobile robots. Radar's longer wavelength is less affected by small particulates, providing operational advantages in challenging environments such as dust, smoke, and fog. This paper presents Radar Teach and Repeat (RT&R): a full-stack radar system for long-term off-road robot autonomy. RT&R can drive routes reliably in cluttered off-road areas without any GPS. We benchmark the radar system's closed-loop path-tracking performance and compare it to its 3D LiDAR counterpart. A total of 11.8 km of autonomous driving was completed without interventions using only radar and a gyro for navigation. RT&R was evaluated on three routes with progressively less structured scene geometry, achieving lateral path-tracking root mean squared errors (RMSE) of 5.6 cm, 7.5 cm, and 12.1 cm as the routes became more challenging. On the robot used for testing, these RMSE values are less than half the width of one tire (24 cm). The worst-case errors on the same routes were 21.7 cm, 24.0 cm, and 43.8 cm. We conclude that radar is a viable alternative to LiDAR for long-term autonomy in challenging off-road scenarios. The implementation of RT&R is open-source and available at: https://github.com/utiasASRL/vtr3.
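As a point of reference for the error metric reported above, lateral path-tracking RMSE can be computed from the signed lateral errors logged along a repeated route. The sketch below is a minimal illustration with a hypothetical function name and made-up error values; it is not the RT&R evaluation code.

```python
import numpy as np

def lateral_rmse(lateral_errors_m):
    """Root mean squared error of signed lateral path-tracking errors (metres)."""
    e = np.asarray(lateral_errors_m, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# Hypothetical lateral errors (metres) logged along a repeated route.
errors = np.array([0.03, -0.05, 0.08, -0.02, 0.06])
print(f"lateral RMSE: {lateral_rmse(errors) * 100:.1f} cm")
print(f"worst case:   {np.max(np.abs(errors)) * 100:.1f} cm")
```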
Abstract: In this paper, we propose the FoMo (Forêt Montmorency) dataset: a comprehensive, multi-season data collection. Located in the Montmorency Forest, Quebec, Canada, our dataset will capture a rich variety of sensory data over six distinct trajectories totaling 6 kilometers, repeated through different seasons to accumulate 42 kilometers of recorded data. The boreal forest setting adds to the diversity of environments available in datasets for mobile robot navigation. The proposed dataset will feature a broad array of sensor modalities, including lidar, radar, and a navigation-grade Inertial Measurement Unit (IMU), against the backdrop of challenging boreal forest conditions. Notably, the FoMo dataset will be distinguished by its inclusion of seasonal variations, such as changes in tree canopy and snow depth of up to 2 meters, presenting new challenges for robot navigation algorithms. In addition, we will offer centimeter-level accurate ground truth, obtained through Post-Processed Kinematic (PPK) Global Navigation Satellite System (GNSS) correction, facilitating precise evaluation of odometry and localization algorithms. This work aims to spur advancements in autonomous navigation, enabling the development of robust algorithms capable of handling the dynamic, unstructured environments characteristic of boreal forests. With a public odometry and localization leaderboard and a dedicated software suite, we invite the robotics community to engage with the FoMo dataset and explore new frontiers in robot navigation under extreme environmental variations. We seek feedback from the community based on this proposal to make the dataset as useful as possible. For further details and supplementary materials, please visit https://norlab-ulaval.github.io/FoMo-website/.
Abstract: This paper presents an approach for applying camera perception techniques to spinning LiDAR data. To improve the robustness of long-term change detection from a 3D LiDAR, range and intensity information are rendered into virtual perspectives using a pinhole camera model. Hue-saturation-value (HSV) image encoding is used to colourize the images by range and near-IR intensity. The LiDAR's active scene illumination makes the imagery invariant to ambient brightness, which enables night-to-day change detection without additional processing. Using the colourized perspective range images allows existing foundation models to detect semantic regions. Specifically, the Segment Anything Model detects semantically similar regions in both a previously acquired map and the live view from a path-repeating robot. By comparing the masks in the two views, changes in the live scan are detected. Results indicate that the Segment Anything Model accurately captures the shape of arbitrary changes introduced into scenes. The system achieves an object recall of 82.6% and a precision of 47.0%, and changes are detected reliably through day-to-night illumination variations. Once pixel-level masks are generated, the one-to-one correspondence between pixels and 3D points means that the 2D masks can be used directly to recover the 3D locations of the changes. The detected 3D changes can then be avoided by treating them as obstacles in a local motion planner.
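A minimal sketch of the rendering step described above follows: LiDAR points are projected with a pinhole model and encoded into an HSV image whose hue follows range and whose value follows near-IR intensity. The intrinsic matrix, image size, hue/value mapping, and function name are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def render_hsv_image(points, intensity, K, width, height, max_range=50.0):
    """Project LiDAR points (N,3) given in the virtual camera frame through a
    pinhole model and colourize pixels by range (hue) and intensity (value)."""
    # Keep points in front of the virtual camera.
    front = points[:, 2] > 0.1
    pts, inten = points[front], intensity[front]

    # Pinhole projection with intrinsic matrix K.
    uvw = (K @ pts.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, pts, inten = u[valid], v[valid], pts[valid], inten[valid]

    rng = np.linalg.norm(pts, axis=1)
    hsv = np.zeros((height, width, 3))
    hsv[v, u, 0] = np.clip(rng / max_range, 0.0, 1.0)   # hue <- range
    hsv[v, u, 1] = 1.0                                   # full saturation
    hsv[v, u, 2] = np.clip(inten, 0.0, 1.0)              # value <- intensity
    return hsv_to_rgb(hsv)

# Hypothetical usage with random points and a simple intrinsic matrix.
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.random.uniform([-10.0, -5.0, 1.0], [10.0, 5.0, 40.0], size=(5000, 3))
img = render_hsv_image(pts, np.random.rand(5000), K, 640, 480)
```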
Abstract: This paper presents a fully unsupervised deep change detection approach for mobile robots with 3D LiDAR. In unstructured environments, it is infeasible to define a closed set of semantic classes; instead, semantic segmentation is reformulated as binary change detection. We develop a neural network, RangeNetCD, that uses an existing point-cloud map and a live LiDAR scan to detect scene changes with respect to the map. Using a novel loss function, existing point-cloud semantic segmentation networks can be trained to perform change detection without any labels or assumptions about local semantics. We demonstrate the performance of this approach on data from challenging terrains; mean intersection-over-union (mIoU) scores range between 67.4% and 82.2% depending on the amount of environmental structure, outperforming the geometric baseline used in all experiments. The neural network runs at faster than 10 Hz and is integrated into a robot's autonomy stack to allow safe navigation around obstacles that intersect the planned path. In addition, we describe a novel method for the rapid, automated acquisition of per-point ground-truth labels: covering changed parts of the scene with retroreflective materials and applying a threshold filter to the LiDAR's intensity channel allows for quantitative evaluation of the change detector.
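The automated labelling procedure described at the end of the abstract hinges on retroreflective surfaces returning unusually high intensity. The sketch below shows one way such a threshold filter and a per-class IoU check could look; the threshold value, function names, and sample data are assumptions for illustration, not the paper's code.

```python
import numpy as np

def label_changes_by_intensity(intensity, threshold=0.8):
    """Per-point binary ground-truth labels: points on retroreflective
    (changed) surfaces return high intensity and are marked 1."""
    return (np.asarray(intensity) > threshold).astype(np.uint8)

def binary_iou(pred, gt):
    """Intersection-over-union for the 'changed' class."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

# Hypothetical per-point intensities in [0, 1] and network predictions.
intensity = np.array([0.12, 0.95, 0.40, 0.88, 0.05, 0.91])
gt = label_changes_by_intensity(intensity)           # -> [0 1 0 1 0 1]
pred = np.array([0, 1, 0, 0, 0, 1], dtype=np.uint8)  # assumed network output
print(f"IoU (changed class): {binary_iou(pred, gt):.2f}")
```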