Abstract: Applications of reinforcement learning (RL) are popular in autonomous driving tasks. That said, tuning the performance of an RL agent and guaranteeing its generalization across a variety of driving scenarios remain largely open problems. In particular, achieving good performance under complex road and weather conditions requires exhaustive tuning and computation time. Curriculum RL, which focuses on solving simpler automation tasks in order to transfer knowledge to complex tasks, is attracting attention in the RL community. The main contribution of this paper is a systematic study investigating the value of curriculum reinforcement learning in autonomous driving applications. For this purpose, we set up several driving scenarios in a realistic driving simulator, with varying road complexity and weather conditions. Next, we train RL agents on different sequences of task combinations and curricula, and evaluate their performance. Results show that curriculum RL can yield significant gains in complex driving tasks, both in terms of driving performance and sample complexity. Results also demonstrate that different curricula can enable different benefits, which hints at future research directions in automated curriculum training.
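
To make the staged-training idea concrete, below is a minimal curriculum-RL sketch in Python using stable-baselines3 PPO. The make_driving_env() factory, the task parameters, and the curriculum ordering are illustrative assumptions (the abstract does not expose the simulator interface); the stub substitutes a generic Gymnasium task so the loop actually runs.

import gymnasium as gym
from stable_baselines3 import PPO

def make_driving_env(road: str, weather: str) -> gym.Env:
    # Hypothetical stand-in: a real setup would configure the driving
    # simulator with the requested road complexity and weather conditions.
    # We substitute a toy control task so this sketch is runnable.
    return gym.make("CartPole-v1")

# Hypothetical curriculum, ordered from simple to complex tasks.
curriculum = [
    {"road": "straight", "weather": "clear"},
    {"road": "curved", "weather": "clear"},
    {"road": "curved", "weather": "rain"},
]

model = None
for task in curriculum:
    env = make_driving_env(**task)
    if model is None:
        # Fresh agent, trained first on the easiest task.
        model = PPO("MlpPolicy", env, verbose=0)
    else:
        # Carry the learned weights forward into the harder task.
        model.set_env(env)
    model.learn(total_timesteps=50_000)  # per-stage budget (illustrative)

The key design choice a curriculum study varies is exactly this task sequence: different orderings of the same stages can transfer different amounts of knowledge to the final, hardest task.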
Abstract: Automated lane changing is a critical feature for advanced autonomous driving systems. In recent years, reinforcement learning (RL) algorithms trained on traffic simulators have yielded successful results in computing lane-changing policies that strike a balance between safety, agility, and compensating for traffic uncertainty. However, many RL algorithms exhibit simulator bias, and policies trained on simple simulators do not generalize well to realistic traffic scenarios. In this work, we develop a data-driven traffic simulator by training a generative adversarial network (GAN) on real-life trajectory data. The simulator generates randomized trajectories that resemble real-life traffic interactions between vehicles, which enables training the RL agent on much richer and more realistic scenarios. We demonstrate through simulations that RL agents trained on the GAN-based traffic simulator have stronger generalization capabilities compared to RL agents trained on simple rule-driven simulators.
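
As an illustration of the GAN-based simulator idea, here is a minimal PyTorch sketch that trains a generator to produce fixed-length (x, y) waypoint trajectories against a discriminator. The trajectory length, network sizes, and the real_batch() loader are hypothetical placeholders, not the paper's actual architecture or dataset.

import torch
import torch.nn as nn

T, NOISE = 50, 16  # waypoints per trajectory, latent dimension (assumed)

# Generator: latent noise -> flattened trajectory of T (x, y) waypoints.
gen = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, T * 2))
# Discriminator: trajectory -> real/fake logit.
disc = nn.Sequential(nn.Linear(T * 2, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Placeholder for a loader of recorded vehicle trajectories,
    # flattened to T * 2 features; here it returns random data.
    return torch.randn(n, T * 2)

for step in range(1000):
    # Discriminator update: score real trajectories as 1, generated as 0.
    real = real_batch()
    fake = gen(torch.randn(real.size(0), NOISE)).detach()
    d_loss = bce(disc(real), torch.ones(real.size(0), 1)) + \
             bce(disc(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator into scoring fakes as real.
    fake = gen(torch.randn(64, NOISE))
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Once trained, sampling the generator yields randomized but realistic-looking trajectories for surrounding vehicles, which can seed the traffic scenarios the RL agent trains against.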