Abstract: We address the decision-making capability within an end-to-end planning framework that encompasses motion prediction, decision-making, and trajectory planning. Specifically, we formulate decision-making and trajectory planning as a differentiable nonlinear optimization problem, which ensures compatibility with learning-based modules and establishes an end-to-end trainable architecture. The optimization introduces explicit objectives for safety, traveling efficiency, and riding comfort, which guide the learning process in the proposed pipeline. Intrinsic constraints arising from the decision-making task are integrated into the optimization formulation and preserved throughout the learning process. By coupling the differentiable optimizer with a neural network predictor, the proposed framework is end-to-end trainable, aligning the individual driving tasks with the ultimate performance goals defined by the optimization objectives. The framework is trained and validated on the Waymo Open Motion Dataset. Open-loop testing reveals that, although the planning outcomes produced by our method do not always resemble the expert trajectory, they consistently outperform baseline approaches in safety, traveling efficiency, and riding comfort. Closed-loop testing further demonstrates the effectiveness of optimizing decisions and improving driving performance. Ablation studies show that the initialization provided by the learning-based prediction module is essential for both the convergence of the optimizer and the overall driving performance.
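The differentiable optimization at the core of this framework can be illustrated with a small, self-contained sketch. The example below is not the paper's formulation: it assumes a hypothetical point-mass cost with placeholder safety, efficiency, and comfort terms, and it unrolls a few gradient steps of an inner optimizer in PyTorch so that gradients flow back into the neural predictor that supplies the initialization.

```python
import torch
import torch.nn as nn

# Minimal sketch (illustrative only): a neural predictor proposes an initial
# trajectory, and a differentiable inner optimizer refines it by unrolling
# gradient steps on an explicit planning cost (safety / efficiency / comfort
# placeholders). Because the refinement is plain autograd arithmetic, loss
# gradients flow back into the predictor, making the pipeline end-to-end trainable.

H = 20  # planning horizon (hypothetical)
predictor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2 * H))

def planning_cost(traj, obstacle, goal):
    # traj: (B, H, 2) positions; every term is a placeholder, not the paper's cost.
    comfort = (traj[:, 1:] - traj[:, :-1]).pow(2).sum(dim=(1, 2))           # smoothness
    progress = (traj[:, -1] - goal).pow(2).sum(dim=1)                        # reach the goal
    clearance = torch.relu(1.0 - (traj - obstacle[:, None]).norm(dim=2)).pow(2).sum(dim=1)
    return comfort + progress + 10.0 * clearance

def differentiable_plan(features, obstacle, goal, steps=10, lr=0.05):
    traj = predictor(features).view(-1, H, 2)        # learned initialization
    for _ in range(steps):                           # unrolled inner optimizer
        grad, = torch.autograd.grad(planning_cost(traj, obstacle, goal).sum(),
                                    traj, create_graph=True)
        traj = traj - lr * grad
    return traj

# Training step: supervise against expert trajectories while the optimization
# objective shapes the refinement (both paths are differentiable).
features = torch.randn(4, 32)
obstacle, goal = torch.randn(4, 2), torch.randn(4, 2)
expert = torch.randn(4, H, 2)
plan = differentiable_plan(features, obstacle, goal)
loss = (plan - expert).pow(2).mean() + 0.01 * planning_cost(plan, obstacle, goal).mean()
loss.backward()
```

Unrolling keeps the inner solver differentiable without implicit-function machinery; the actual framework may differentiate through its optimizer by other means.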
Abstract: This paper introduces a local planner that unifies the decision-making and trajectory planning modules for autonomous driving. The decision-making and trajectory planning tasks are jointly formulated as a nonlinear programming problem with an integrated objective function. However, embedding the discrete decision variables into the continuous trajectory optimization yields a mixed-integer programming (MIP) problem with inherent nonlinearity and nonconvexity. To address the resulting computational challenge, the original problem is decomposed into two sub-stages, and a two-stage optimization (TSO) based approach is presented to ensure coherence between the outcomes of the two stages. The optimization problem in the first stage determines the optimal decision sequence, which serves as an informed initialization. Using the outputs of the first stage, the second stage solves the trajectory planning problem with a high-fidelity vehicle model and strictly enforced collision-avoidance constraints. We evaluate the effectiveness of the proposed planner across diverse multi-lane scenarios. The results demonstrate that the proposed planner simultaneously generates a sequence of optimal decisions and the corresponding trajectory, significantly improving driving safety and traveling efficiency compared with alternative methods. Additionally, we implement closed-loop simulation in CARLA, and the results showcase the ability of the proposed planner to adapt to changing driving situations with high computational efficiency.
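A minimal sketch of the two-stage structure is given below, under strong simplifying assumptions: stage one enumerates a hypothetical discrete decision (a target lane) with a coarse constant-speed model, and stage two solves a small continuous problem with SciPy, warm-started by that decision. The point-mass model, costs, and constraints are illustrative stand-ins, not the high-fidelity formulation described in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the two-stage idea: stage 1 selects a discrete decision sequence
# with a coarse cost, stage 2 solves a continuous trajectory problem warm-started
# by that decision. All quantities below are illustrative placeholders.

H, DT = 10, 0.5
LANES = np.array([0.0, 3.5])              # lane-center lateral offsets (hypothetical)
obstacle = np.array([20.0, 0.0])          # a static obstacle in lane 0

def stage1_decision(x0):
    # Enumerate constant-lane decisions and keep the cheapest coarse plan.
    best = None
    for lane in LANES:
        xs = x0[0] + 10.0 * DT * np.arange(1, H + 1)          # constant-speed rollout
        ys = np.full(H, lane)
        cost = np.sum((ys - x0[1]) ** 2)                      # lateral effort
        cost += 1e3 * np.any(np.abs(xs - obstacle[0]) < 5.0) * (lane == obstacle[1])
        if best is None or cost < best[0]:
            best = (cost, np.stack([xs, ys], axis=1))
    return best[1]                                            # reference trajectory

def stage2_trajectory(x0, reference):
    def cost(z):
        traj = z.reshape(H, 2)
        track = np.sum((traj - reference) ** 2)               # follow the decision
        smooth = np.sum(np.diff(np.vstack([x0, traj]), axis=0) ** 2)  # comfort proxy
        return track + smooth

    def clearance(z):
        traj = z.reshape(H, 2)
        return np.linalg.norm(traj - obstacle, axis=1) - 4.0  # must stay >= 0

    res = minimize(cost, reference.ravel(), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": clearance}])
    return res.x.reshape(H, 2)

x0 = np.array([0.0, 0.0])
ref = stage1_decision(x0)            # stage 1: discrete decision as informed initialization
traj = stage2_trajectory(x0, ref)    # stage 2: constrained continuous refinement
```

The warm start from stage one is what keeps the nonconvex second-stage problem tractable; the abstract's actual second stage additionally uses a high-fidelity vehicle model rather than the point-mass shown here.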
Abstract: For autonomous driving in highly dynamic environments, the ego vehicle is expected to predict the future behaviors of surrounding vehicles (SVs) and make safe and effective decisions. However, modeling the inherent coupling between the prediction and decision-making modules has been a long-standing challenge, especially when appropriate computational efficiency must be maintained. To tackle these problems, we propose a novel integrated intention prediction and decision-making approach that explicitly models the coupling relationship and achieves efficient computation. Specifically, a spectrum attention net is designed to predict the intentions of SVs by capturing the trend of each frequency component over time and the interrelations among components. Fast computation of the intention prediction module is attained because the predicted intentions are not decoded into trajectories during execution. Furthermore, the proximal policy optimization (PPO) algorithm is employed to address the non-stationarity in the framework through modest policy updates enabled by the clipping mechanism in its objective function. Building on these developments, the intention prediction and decision-making modules are integrated through joint learning. Experiments are conducted in representative traffic scenarios, and the results reveal that the proposed integrated framework outperforms several deep reinforcement learning (DRL) baselines in terms of success rate, efficiency, and safety in driving tasks.
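The two ingredients named in the abstract can be sketched as follows, with all architectural details treated as assumptions: a frequency-domain attention encoder over each SV's state history standing in for the spectrum attention net, and the standard PPO clipped surrogate objective. Layer sizes, the fusion scheme, and the number of intention classes are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: (i) attention over the frequency spectrum of each SV's
# state history, producing intention logits without decoding trajectories, and
# (ii) PPO's clipped surrogate objective, which keeps each policy update modest.

class SpectrumAttention(nn.Module):
    def __init__(self, feat_dim=4, d_model=64, n_intentions=3):
        super().__init__()
        self.embed = nn.Linear(2 * feat_dim, d_model)   # real + imaginary parts per bin
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_intentions)

    def forward(self, hist):                            # hist: (B, T, feat_dim)
        spec = torch.fft.rfft(hist, dim=1)              # frequency components over time
        tokens = self.embed(torch.cat([spec.real, spec.imag], dim=-1))
        fused, _ = self.attn(tokens, tokens, tokens)    # interrelations across frequency bins
        return self.head(fused.mean(dim=1))             # intention logits

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    # Standard clipped surrogate: probability ratios are clipped so one update
    # cannot move the policy too far, easing the non-stationarity of joint learning.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()

# Shape check with random data.
net = SpectrumAttention()
logits = net(torch.randn(8, 32, 4))                     # (8, 3) intention logits
loss = ppo_clip_loss(torch.randn(8), torch.randn(8), torch.randn(8))
```

Because the intention head emits class logits rather than decoded trajectories, the prediction step stays cheap at execution time, which is the efficiency argument made in the abstract.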