Abstract: Deep learning techniques, by leveraging large datasets, have significantly advanced the accuracy of visual odometry (VO) solutions. However, generating uncertainty estimates for these methods remains a challenge. Traditional sensor-fusion approaches in a Bayesian framework are well established, but deep learning models with millions of parameters lack efficient methods for uncertainty estimation. This paper addresses uncertainty estimation for pre-trained deep learning models in monocular VO. We propose formulating a factor graph on an implicit layer of the deep learning network to recover relative covariance estimates, which allows us to determine the covariance of the full VO solution. We demonstrate the consistency of the deep learning engine's covariance approximation through an empirical analysis of the covariance model on the EuRoC datasets, confirming the correctness of our formulation.
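The covariance-recovery step described above can be illustrated with a minimal sketch. The code below is not the paper's formulation on the network's implicit layer; it only assumes GTSAM's Python bindings and shows the generic pattern: treat relative-pose outputs (here, a hypothetical placeholder value) as between-factor measurements with assumed noise sigmas, optimize the factor graph, and read off the marginal covariance of the latest pose as the uncertainty of the VO solution.

```python
# Minimal sketch (not the paper's implementation): recovering a pose
# covariance from a factor graph over relative-pose measurements,
# using GTSAM's Python bindings. Measurement values and noise sigmas
# are hypothetical placeholders.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose with a tight prior (hypothetical sigmas).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-4))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))

# A relative-pose "measurement", standing in for the deep VO front end's
# output between consecutive frames (placeholder value).
odom = gtsam.Pose3(gtsam.Rot3.Yaw(0.05), gtsam.Point3(0.10, 0.0, 0.0))
# Per-factor noise, standing in for the relative covariance estimate.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]))  # rot (rad), trans (m)
graph.add(gtsam.BetweenFactorPose3(0, 1, odom, odom_noise))

# Initialize and optimize the graph.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose3())
initial.insert(1, odom)
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()

# Marginal covariance of the latest pose: the quantity attached to the
# VO solution as its uncertainty estimate.
marginals = gtsam.Marginals(graph, result)
print(marginals.marginalCovariance(1))  # 6x6 tangent-space covariance
```

In a full VO pipeline this pattern would be applied incrementally, appending one between-factor per frame pair and querying the marginal covariance of the newest pose after each optimization.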
Abstract: Robustness in Simultaneous Localization and Mapping (SLAM) remains one of the key challenges for the real-world deployment of autonomous systems. SLAM research has seen significant progress over the last two and a half decades, yet many state-of-the-art (SOTA) algorithms still struggle to perform reliably in real-world environments. There is a general consensus in the research community that challenging real-world scenarios are needed to expose the failure modes of different sensing modalities. In this paper, we present a novel multi-modal indoor SLAM dataset covering common challenging scenarios that a robot will encounter and should be robust to. The data was collected with a mobile robotics platform across multiple floors of Northeastern University's ISEC building. Such a multi-floor sequence is typical of commercial office spaces, which are characterized by symmetry across floors and are thus prone to perceptual aliasing due to similar floor layouts. The sensor suite comprises seven global-shutter cameras, a high-grade MEMS inertial measurement unit (IMU), a ZED stereo camera, and a 128-channel high-resolution lidar. Along with the dataset, we benchmark several SLAM algorithms and highlight the problems encountered during the runs, such as perceptual aliasing, visual degradation, and trajectory drift. The benchmarking results indicate that some algorithms handle parts of the dataset well, while other sections remain challenging even for the best SOTA algorithms. The dataset is available at https://github.com/neufieldrobotics/NUFR-M3F.