Abstract: A fundamental prerequisite for safe and efficient navigation of mobile robots is the availability of reliable navigation maps upon which trajectories can be planned. With the increasing industrial interest in mobile robotics, especially in urban environments, the process of generating navigation maps has become of particular interest, as it is a labor-intensive step of the deployment process. Automating this step is challenging and becomes even more arduous when the perception capabilities are limited by cost considerations. This paper proposes an algorithm to automatically generate navigation maps using a typical navigation-oriented sensor setup: a single top-mounted 3D LiDAR sensor. The proposed method is designed and validated with the urban environment as the main use case: it is shown to produce accurate maps featuring different terrain types, positive obstacles of different heights, as well as negative obstacles. The algorithm is applied to data collected in a typical urban environment with a wheeled inverted-pendulum robot, showing its robustness against localization, perception, and dynamic uncertainties. The generated map is validated against a human-made map.
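As an illustration of the kind of processing involved (not the paper's specific algorithm), the following minimal Python sketch labels a 2D grid from a 3D LiDAR point cloud using per-cell elevation statistics, marking free cells, positive obstacles, and negative obstacles; the thresholds, resolution, and frame conventions are all assumptions.

```python
import numpy as np

def build_navigation_map(points, cell=0.1, x_range=(-20, 20), y_range=(-20, 20),
                         ground_z=0.0, pos_thresh=0.15, neg_thresh=-0.15):
    """Label each grid cell as unknown (-1), free (0), positive obstacle (1)
    or negative obstacle (2) from a LiDAR point cloud (N x 3 array), assuming
    z = ground_z is the nominal ground plane in the chosen frame."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = -np.ones((nx, ny), dtype=np.int8)              # unknown by default

    # bin points into grid cells, discarding those outside the map bounds
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    ix, iy, z = ix[valid], iy[valid], points[valid, 2]

    # per-cell maximum and minimum elevation
    zmax = np.full((nx, ny), -np.inf)
    zmin = np.full((nx, ny), np.inf)
    np.maximum.at(zmax, (ix, iy), z)
    np.minimum.at(zmin, (ix, iy), z)

    observed = np.isfinite(zmax)
    grid[observed] = 0                                         # traversable
    grid[observed & (zmax - ground_z > pos_thresh)] = 1        # positive obstacle
    grid[observed & (zmin - ground_z < neg_thresh)] = 2        # negative obstacle (e.g. curb drop)
    return grid
```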
Abstract: Despite the number of works published in recent years, vehicle localization remains an open, challenging problem. While map-based localization and SLAM algorithms continue to improve, they remain a single point of failure in typical localization pipelines. This paper proposes a modular localization architecture that fuses sensor measurements with the outputs of off-the-shelf localization algorithms. The fusion filter estimates model uncertainties to improve odometry in case absolute pose measurements are lost entirely. The architecture is validated experimentally on a real robot navigating autonomously, demonstrating a position-error reduction of more than 90% with respect to the odometric estimate obtained without uncertainty estimation, over a two-minute navigation period without position measurements.
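To illustrate the general idea of estimating an odometry error term so that dead reckoning stays corrected when absolute measurements drop out, here is a minimal 1-D Kalman filter sketch; the state choice, model, and noise values are illustrative assumptions, not the architecture described above.

```python
import numpy as np

# State x = [position, odometry velocity bias]. While position fixes arrive,
# the filter learns the bias; when they stop, the learned bias keeps
# correcting the dead-reckoned position. All numeric values are assumed.
dt = 0.1
F = np.array([[1.0, -dt],       # position advances by (v_odom - bias) * dt
              [0.0, 1.0]])      # bias modeled as a slowly varying random walk
Q = np.diag([1e-4, 1e-6])       # process noise (assumed)
H = np.array([[1.0, 0.0]])      # absolute sensor measures position only
R = np.array([[0.05]])          # measurement noise (assumed)

def predict(x, P, v_odom):
    """Propagate with the odometric velocity; runs even without position fixes."""
    x = F @ x + np.array([v_odom * dt, 0.0])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z_pos):
    """Correct with an absolute position fix, refining the bias estimate."""
    y = z_pos - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```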
Abstract: This paper presents a novel multi-modal Multi-Object Tracking (MOT) algorithm for self-driving cars that combines camera and LiDAR data. Camera frames are processed with a state-of-the-art 3D object detector, whereas classical clustering techniques are used to process LiDAR observations. The proposed MOT algorithm comprises a three-step association process, an Extended Kalman Filter (EKF) for estimating the motion of each detected dynamic obstacle, and a track management phase. The EKF motion model takes as inputs the current measured relative position and orientation of the observed object and the longitudinal and angular velocities of the ego vehicle. Unlike most state-of-the-art multi-modal MOT approaches, the proposed algorithm does not rely on maps or on knowledge of the ego global pose. Moreover, it uses a 3D detector only for camera data and is agnostic to the type of LiDAR sensor used. The algorithm is validated both in simulation and with real-world data, with satisfactory results.
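The sketch below shows one plausible form of an EKF prediction step for a track kept in the ego frame, using the ego longitudinal velocity and yaw rate as inputs; it is an assumed constant-velocity, planar formulation for illustration, not necessarily the motion model used in the paper.

```python
import numpy as np

def ekf_predict(x, P, v_ego, w_ego, dt, Q):
    """Illustrative EKF prediction for a track state [x, y, yaw, v] expressed
    in the ego frame: constant-velocity object motion plus compensation of the
    ego longitudinal velocity v_ego and yaw rate w_ego (planar assumption)."""
    px, py, yaw, v = x
    cy, sy = np.cos(yaw), np.sin(yaw)

    # object moves with constant speed along its heading
    qx = px + v * cy * dt
    qy = py + v * sy * dt

    # ego moved forward by v_ego*dt and rotated by w_ego*dt; re-express the
    # predicted point in the new ego frame (small-dt approximation)
    dth = w_ego * dt
    c, s = np.cos(dth), np.sin(dth)
    tx = v_ego * dt
    x_pred = np.array([ c * (qx - tx) + s * qy,
                       -s * (qx - tx) + c * qy,
                        yaw - dth,
                        v])

    # Jacobian of the prediction with respect to the state
    F = np.array([
        [ c,  s, dt * v * (-c * sy + s * cy), dt * ( c * cy + s * sy)],
        [-s,  c, dt * v * ( s * sy + c * cy), dt * (-s * cy + c * sy)],
        [ 0,  0, 1, 0],
        [ 0,  0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```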
Abstract: State-of-the-art vehicle dynamics estimation techniques usually share one common drawback: each variable to estimate is computed with an independent, simplified filtering module. These modules run in parallel and need to be calibrated separately. To solve this issue, a unified Twin-in-the-Loop (TiL) Observer architecture has recently been proposed: the classical simplified control-oriented vehicle model in the estimators is replaced by a full-fledged vehicle simulator, or digital twin (DT). The states of the DT are corrected in real time with a linear time-invariant output-error law. Since the simulator is a black box, no explicit analytical formulation is available, and classical filter tuning techniques cannot be used. For this reason, Bayesian Optimization is used to solve a data-driven optimization problem that tunes the filter. Due to the complexity of the DT, the optimization problem is high-dimensional. This paper aims to find a procedure to tune the high-complexity observer by lowering its dimensionality. In particular, this work analyzes both a supervised and an unsupervised learning approach. The strategies have been validated for speed and yaw-rate estimation on real-world data.
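As a toy illustration of data-driven, black-box filter tuning with Bayesian Optimization, the sketch below (using scikit-optimize's gp_minimize) selects two correction gains of a stand-in observer by minimizing its output error on synthetic data; the actual digital twin, gain parameterization, and cost function are far more complex and are not reproduced here.

```python
import numpy as np
from skopt import gp_minimize   # pip install scikit-optimize

# Synthetic "recorded" speed signal standing in for real-world logs.
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)
v_meas = 20.0 + np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size)

def observer_error(theta):
    """Run a trivial stand-in observer with correction gains theta = [k1, k2]
    (a PD-like output-error correction) and return its mean squared error."""
    k1, k2 = theta
    v_hat = np.zeros_like(v_meas)
    e_prev = 0.0
    for i in range(1, t.size):
        e = v_meas[i - 1] - v_hat[i - 1]
        v_hat[i] = v_hat[i - 1] + k1 * e + k2 * (e - e_prev)
        e_prev = e
    return float(np.mean((v_hat - v_meas) ** 2))

# Bayesian Optimization treats the observer as a black box: it only needs
# the cost value returned by each (simulated) run.
result = gp_minimize(observer_error,
                     dimensions=[(0.0, 1.0), (-5.0, 5.0)],  # gain bounds (assumed)
                     n_calls=50, random_state=0)
print("best gains:", result.x, "cost:", result.fun)
```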
Abstract: This paper presents a vehicle-independent, non-intrusive, and lightweight monitoring system for accurately measuring fuel consumption in road vehicles from longitudinal speed and acceleration, derived continuously in time from GNSS and IMU sensors mounted inside the vehicle. In parallel to boosting the transition to zero-carbon cars, there is an increasing interest in low-cost instruments for precise measurement of the environmental impact of the many internal combustion engine vehicles still in circulation. The main contribution of this work is the design and comparison of two innovative black-box algorithms for fuel consumption estimation using only velocity and acceleration measurements: one based on reduced-complexity physics modeling and the other relying on a feedforward neural network. Based on suitable metrics, the developed algorithms outperform the best state-of-the-art approach, both in instantaneous and in integral fuel consumption estimation, with errors smaller than 1% with respect to the fuel-flow ground truth. The data used for model identification, testing, and experimental validation consist of GNSS velocity and IMU acceleration measurements collected during several trips with a diesel vehicle on different roads, in different seasons, and with varying numbers of passengers. Compared to built-in vehicle monitoring systems, this methodology requires no vehicle-specific customization, uses off-the-shelf sensors, and is based on two simple algorithms that have been validated offline and could easily be implemented in a real-time environment.
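For intuition, the following sketch shows a generic reduced-complexity, physics-style estimate of instantaneous and integral fuel consumption from speed and acceleration; every vehicle parameter and efficiency value below is an assumption chosen for illustration, not the identified model from the paper.

```python
import numpy as np

# Assumed parameters for a generic passenger car (not identified values).
m, rho, Cd, A, Cr, g = 1500.0, 1.225, 0.30, 2.2, 0.012, 9.81
eta_drive = 0.9          # driveline efficiency (assumed)
eta_engine = 0.35        # mean engine efficiency (assumed)
lhv = 42.8e6             # diesel lower heating value [J/kg] (typical value)
idle_flow = 0.2e-3       # idle fuel flow [kg/s] (assumed)

def fuel_flow(v, a):
    """Instantaneous fuel mass flow [kg/s] from speed v [m/s] and acceleration a [m/s^2]."""
    F = m * a + 0.5 * rho * Cd * A * v**2 + Cr * m * g   # tractive force (inertia + drag + rolling)
    P = np.maximum(F * v / eta_drive, 0.0)               # engine power demand; no fuel credit when braking
    return idle_flow + P / (eta_engine * lhv)

def trip_fuel(v, a, dt):
    """Integral fuel consumption [kg] over a trip sampled every dt seconds."""
    return float(np.sum(fuel_flow(v, a)) * dt)
```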