Abstract: Deep reinforcement learning (DRL) allows a system to interact with its environment and learn a policy that maximizes a user-defined reward. In autonomous driving, it can serve as a strategy for high-level decision making, whereas low-level algorithms such as hybrid A* path planning have proven their ability to solve the local trajectory planning problem. In this work, we combine the two: the DRL agent makes high-level decisions such as lane-change commands, and given such a command, the hybrid A* planner generates a collision-free trajectory that is executed by a model predictive controller (MPC). In addition, the DRL algorithm keeps the lane-change command consistent over a chosen time period. Traffic rules are encoded in linear temporal logic (LTL), which is then used as a reward function for the DRL agent. Finally, we validate the proposed method on a real system to demonstrate its transferability from simulation to real hardware.
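As a minimal illustration of two ingredients mentioned above, the Python sketch below shows (a) an LTL-style "always" traffic rule turned into a reward penalty and (b) latching a lane-change command for a fixed number of decision steps. The rule G(gap >= MIN_GAP), the constants, and the CommandHold helper are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: the LTL rule, MIN_GAP, and HOLD_STEPS are assumptions,
# not the paper's exact formulation.
MIN_GAP = 10.0    # assumed minimum safe gap to the lead vehicle [m]
HOLD_STEPS = 20   # assumed number of steps a lane-change command is latched

def ltl_always_safe_gap(history):
    """Evaluate the LTL safety rule G(gap >= MIN_GAP) on a finite state history."""
    return all(state["gap"] >= MIN_GAP for state in history)

def reward(history, progress):
    """Task progress plus a penalty whenever the traffic rule is violated."""
    return progress + (0.0 if ltl_always_safe_gap(history) else -1.0)

class CommandHold:
    """Latch a high-level command for HOLD_STEPS decisions so the hybrid A*
    planner receives a consistent goal during the maneuver."""
    def __init__(self):
        self.command = "keep_lane"
        self.steps_left = 0

    def update(self, proposed):
        if self.steps_left > 0:
            self.steps_left -= 1       # keep the latched command
        elif proposed != self.command:
            self.command = proposed    # accept the new DRL decision
            self.steps_left = HOLD_STEPS if proposed != "keep_lane" else 0
        return self.command
```

In this sketch the rule penalty is sparse (violated or not); grading the penalty by the degree of violation is a common alternative for easier learning.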
Abstract: Global navigation satellite systems readily provide accurate position information when localizing a robot outdoors. However, no analogous standard solution yet exists for mobile robots operating indoors. This paper presents an integrated framework for indoor localization and the experimental validation of an autonomous driving system based on an advanced driver-assistance system (ADAS) model car. The global pose of the model car is obtained by fusing information from fiducial markers, inertial sensors, and wheel odometry. To achieve robust localization, we investigate and compare two extensions of the extended Kalman filter (EKF): the first with adaptive noise tuning and the second with a Chi-squared test for measurement outlier detection. An efficient, low-cost ground-truth measurement method using a single LiDAR sensor is also proposed to validate the results. The performance of the localization algorithms is tested on a complete autonomous driving system with trajectory planning and model predictive control.
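As a sketch of the second EKF extension, the Python snippet below applies a Chi-squared gate to the measurement innovation before the update step. The linear measurement model, the 95% confidence level, and the function name gated_ekf_update are assumptions for illustration, not the paper's implementation.

```python
# Sketch of Chi-squared innovation gating for an EKF update (assumptions:
# generic linear measurement model z = H x + v; 95% gate is illustrative).
import numpy as np
from scipy.stats import chi2

def gated_ekf_update(x, P, z, H, R, confidence=0.95):
    """One EKF measurement update that rejects outliers whose squared
    Mahalanobis distance exceeds the Chi-squared gate."""
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    d2 = float(y @ np.linalg.inv(S) @ y)      # squared Mahalanobis distance
    gate = chi2.ppf(confidence, df=len(z))    # chi^2 gate with dim(z) dof
    if d2 > gate:
        return x, P, False                    # measurement rejected as outlier
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, True
```

A rejected measurement simply leaves the prior state and covariance untouched, which keeps the filter consistent when, for example, a fiducial-marker detection is spurious.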