Abstract: Uncertainties in the environment and inaccuracies in the behavior model compromise the state estimation of a dynamic obstacle and its trajectory predictions, introducing estimation biases and shifts in the predictive distribution. Addressing these challenges is crucial for safely controlling an autonomous system. In this paper, we propose a novel algorithm, SIED-MPC, which synergistically integrates Simultaneous State and Input Estimation (SSIE) and Distributionally Robust Model Predictive Control (DR-MPC) through model-confidence evaluation. The SSIE process produces unbiased state estimates and optimal input-gap estimates that quantify the confidence in the behavior model; this confidence defines the ambiguity radius with which DR-MPC handles shifts in the predictive distribution. This systematic confidence evaluation yields safe control inputs with an adequate level of conservatism. In autonomous-driving simulations, our algorithm reduced the collision rate through improved state estimation, with a 54% shorter average computation time.
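To make the estimation-to-control loop concrete, below is a minimal Python sketch under simplifying assumptions: a linear obstacle model, an SSIE step approximated by an augmented-state Kalman filter that treats the unknown input gap as a random-walk state, and a hypothetical `gap_to_radius` map from the estimated gap to the DR-MPC ambiguity radius. It is not the paper's exact construction, only an illustration of the confidence-evaluation idea.

```python
import numpy as np

# Sketch, assuming a linear obstacle model x_{k+1} = A x_k + B d_k + w_k,
# where d_k is the unknown gap between the nominal behavior model and the
# obstacle's true input. SSIE is approximated by an augmented-state Kalman
# filter over z = [x; d]; gap_to_radius is an illustrative placeholder.

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # obstacle dynamics (position, velocity)
B = np.array([[0.005], [0.1]])           # channel of the unknown input gap
C = np.array([[1.0, 0.0]])               # position is measured

# Augmented system: the input gap d is modeled as a random walk.
Az = np.block([[A, B], [np.zeros((1, 2)), np.eye(1)]])
Cz = np.hstack([C, np.zeros((1, 1))])
Q = np.diag([1e-4, 1e-4, 1e-2])          # large process noise on d => fast gap tracking
R = np.array([[1e-3]])                   # measurement noise

def ssie_step(z, P, y):
    """One predict/update cycle: jointly estimates the state x and input gap d."""
    z_pred = Az @ z
    P_pred = Az @ P @ Az.T + Q
    S = Cz @ P_pred @ Cz.T + R
    K = P_pred @ Cz.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - Cz @ z_pred)
    P_new = (np.eye(3) - K @ Cz) @ P_pred
    return z_new, P_new

def gap_to_radius(d_hist, scale=2.0):
    """Hypothetical confidence map: larger recent input gaps imply lower model
    confidence, hence a larger ambiguity radius (more conservative DR-MPC)."""
    return scale * np.mean(np.abs(d_hist[-10:]))

# Simulate an obstacle whose true input deviates from the nominal model.
rng = np.random.default_rng(0)
z_est, P = np.zeros(3), np.eye(3)
x_true, d_hist = np.array([0.0, 1.0]), []
for k in range(50):
    d_true = 0.5 * np.sin(0.2 * k)                # unmodeled behavior
    x_true = A @ x_true + (B * d_true).ravel()
    y = C @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)
    z_est, P = ssie_step(z_est, P, y)
    d_hist.append(z_est[2])

radius = gap_to_radius(np.array(d_hist))
print(f"estimated input gap: {z_est[2]:+.3f}, ambiguity radius: {radius:.3f}")
```

The resulting radius would then parameterize the ambiguity set of the DR-MPC stage, so the planner's conservatism scales with how far the obstacle departs from its nominal behavior model.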
Abstract: We introduce $\mathcal{L}_1$-MBRL, a control-theoretic augmentation scheme for Model-Based Reinforcement Learning (MBRL) algorithms. Unlike model-free approaches, MBRL algorithms learn a model of the transition function from data and use it to design control inputs. Our approach generates a sequence of approximate control-affine models of the learned transition function according to a proposed switching law. Using the approximate model, the control input produced by the underlying MBRL algorithm is perturbed by an $\mathcal{L}_1$ adaptive controller designed to enhance the robustness of the system against uncertainties. Importantly, the scheme is agnostic to the choice of MBRL algorithm and can therefore be combined with a wide range of them. MBRL algorithms with $\mathcal{L}_1$ augmentation exhibit improved performance and sample efficiency across multiple MuJoCo environments, outperforming the original algorithms both with and without system noise.
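The sketch below illustrates the augmentation loop on a scalar system: a control-affine model $f(x) + g(x)u$ is extracted from a learned transition function by finite differences, and the baseline MBRL input is perturbed by a standard $\mathcal{L}_1$ architecture (state predictor, piecewise-constant adaptation, low-pass-filtered adaptive input). The functions `f_hat` and `mbrl_policy` and all gains are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Scalar illustration of L1 augmentation of an MBRL controller.
# Assumed placeholders: f_hat (learned transition model) and mbrl_policy
# (baseline MBRL controller); gains are chosen for readability, not tuned.

dt = 0.01          # integration / sampling step Ts
a_s = -10.0        # Hurwitz predictor gain
omega_c = 20.0     # bandwidth of the L1 low-pass filter

def f_hat(x, u):
    """Stand-in for the learned transition function."""
    return -x + u

def affine_approx(x, u0, eps=1e-4):
    """Control-affine model f(x) + g(x) u around (x, u0), extracted from
    the learned model by finite differences in the input."""
    g = (f_hat(x, u0 + eps) - f_hat(x, u0 - eps)) / (2 * eps)
    f = f_hat(x, u0) - g * u0
    return f, g

def mbrl_policy(x):
    """Placeholder for the baseline MBRL controller."""
    return -2.0 * x

# Closed loop: the true plant carries an uncertainty sigma(t) that the
# learned model misses; the L1 augmentation estimates and compensates it.
x, x_pred, u_ad = 1.0, 1.0, 0.0
for k in range(500):
    t = k * dt
    u_bl = mbrl_policy(x)
    f, g = affine_approx(x, u_bl)

    # Piecewise-constant adaptation law driven by the predictor error.
    x_tilde = x_pred - x
    sigma_hat = -(a_s / (np.exp(a_s * dt) - 1.0)) * np.exp(a_s * dt) * x_tilde

    # Adaptive input: low-pass filter of the matched-uncertainty estimate.
    u_ad += dt * omega_c * (-sigma_hat / g - u_ad)
    u = u_bl + u_ad

    # State predictor built on the control-affine model.
    x_pred += dt * (f + g * u + sigma_hat + a_s * x_tilde)

    # True plant: learned dynamics plus an unmodeled disturbance.
    sigma_true = 0.5 * np.sin(2.0 * t)
    x += dt * (f_hat(x, u) + sigma_true)

print(f"final state: {x:+.4f} (regulated toward 0 despite the disturbance)")
```

Because the adaptive input only filters and cancels the estimated uncertainty, the baseline MBRL policy is left untouched, which is what makes the augmentation agnostic to the choice of the underlying algorithm.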