Abstract: This paper introduces MILUV, a Multi-UAV Indoor Localization dataset with UWB and Vision measurements. This dataset comprises 217 minutes of flight time over 36 experiments using three quadcopters, and includes ultra-wideband (UWB) ranging data such as raw timestamps and channel impulse response (CIR) data, vision data from a stereo camera and a bottom-facing monocular camera, inertial measurement unit data, height measurements from a laser rangefinder, magnetometer data, and ground-truth poses from a motion-capture system. The UWB data is collected from up to 12 transceivers affixed to mobile robots and static tripods in both line-of-sight and non-line-of-sight conditions. The UAVs fly at a maximum speed of 4.418 m/s in an indoor environment with visual fiducial markers as features. Although MILUV is versatile and can be used for a wide range of applications beyond localization, its primary purpose is the testing and validation of multi-robot UWB- and vision-based localization algorithms. The dataset can be downloaded at https://doi.org/10.25452/figshare.plus.28386041.v1. A development kit is presented alongside the MILUV dataset, which includes benchmarking algorithms such as visual-inertial odometry, UWB-based localization using an extended Kalman filter, and classification of CIR data using machine learning approaches. The development kit can be found at https://github.com/decargroup/miluv, and is supplemented with a website available at https://decargroup.github.io/miluv/.
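As a rough illustration of the kind of UWB-based extended Kalman filter localization benchmarked in the development kit, the sketch below applies a single range-measurement update to a position-only state. It is a generic textbook formulation, not the devkit's API; the function name, state layout, and noise values are assumptions made here for illustration.

```python
import numpy as np

def ekf_range_update(x, P, r_meas, anchor_pos, R):
    """One EKF update with a single UWB range measurement.

    x          : (3,) position estimate of the tag/robot
    P          : (3, 3) state covariance
    r_meas     : measured range to a fixed anchor [m]
    anchor_pos : (3,) known anchor position
    R          : scalar range-measurement variance [m^2] (assumed value)
    """
    diff = x - anchor_pos
    r_pred = np.linalg.norm(diff)          # predicted range
    H = (diff / r_pred).reshape(1, 3)      # Jacobian of the range model
    S = H @ P @ H.T + R                    # innovation covariance (1, 1)
    K = P @ H.T / S                        # Kalman gain (3, 1)
    x_new = x + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Example with made-up numbers: anchor at the origin, 5 m measured range.
x0 = np.array([3.0, 4.0, 1.0])
P0 = 0.5 * np.eye(3)
x1, P1 = ekf_range_update(x0, P0, r_meas=5.0,
                          anchor_pos=np.zeros(3), R=0.05)
```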
Abstract: This paper introduces a set of customizable and novel cost functions that enable the user to easily specify desirable robot formations, such as a ``high-coverage'' infrastructure-inspection formation, while maintaining high relative pose estimation accuracy. The overall cost function balances the need for the robots to be close together, which benefits ranging-based relative localization accuracy, against the need to achieve specific tasks, such as minimizing the time taken to inspect a given area. The formations found by minimizing the aggregated cost function are evaluated in a coverage path planning task in simulation and in experiment, where the robots localize themselves and unknown landmarks using a simultaneous localization and mapping algorithm based on the extended Kalman filter. Compared to an optimal formation that maximizes ranging-based relative localization accuracy, these formations significantly reduce the time to cover a given area with minimal impact on relative pose estimation accuracy.
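To make the idea of an aggregated cost concrete, the toy sketch below combines a localization term that penalizes large inter-robot distances with a coverage term that rewards spatial spread, traded off by a user-chosen weight. This is purely illustrative and does not reproduce the paper's cost functions; every term, symbol, and weight here is an assumption.

```python
import numpy as np

def formation_cost(positions, w_coverage=0.5):
    """Toy aggregated formation cost for N robots (illustrative only).

    positions  : (N, 2) planar robot positions
    w_coverage : assumed weight trading off localization accuracy
                 (robots close together) against coverage (robots
                 spread out over the area to inspect)
    """
    # Pairwise inter-robot distances (upper triangle, no duplicates).
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(positions), k=1)
    mean_dist = np.mean(dists[iu])

    localization_cost = mean_dist                 # close together -> low cost
    coverage_cost = 1.0 / (mean_dist + 1e-6)      # spread out -> low cost
    return (1.0 - w_coverage) * localization_cost + w_coverage * coverage_cost

# Example: square formation with 2 m sides, coverage weighted lightly.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(formation_cost(pts, w_coverage=0.3))
```

Sweeping `w_coverage` between 0 and 1 in this toy setting mimics the trade-off described above: small values favor compact formations for ranging accuracy, while large values favor spread-out formations that cover an area faster.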