Abstract: This paper studies the adaptive optimal stationary control of continuous-time linear stochastic systems with both additive and multiplicative noises, using reinforcement learning techniques. Based on policy iteration, a novel off-policy reinforcement learning algorithm, named optimistic least-squares-based policy iteration, is proposed; starting from an initial admissible control policy, it iteratively finds near-optimal policies of the adaptive optimal stationary control problem directly from input/state data, without explicitly identifying any system matrices. Under mild conditions, the solutions given by the proposed optimistic least-squares-based policy iteration are proved to converge to a small neighborhood of the optimal solution with probability one. The application of the proposed algorithm to a triple inverted pendulum example validates its feasibility and effectiveness.
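To make the policy-iteration backbone concrete, the following is a minimal sketch of the classical model-based recursion (Kleinman's algorithm) for continuous-time LQR, which the paper's data-driven optimistic least-squares-based policy iteration approximates from input/state data; it does not handle the additive/multiplicative noises of the paper, and the matrices A, B, Q, R and the initial admissible gain K0 are illustrative assumptions only.

```python
# Model-based policy iteration (Kleinman's algorithm) for continuous-time LQR.
# This is only the idealized recursion underlying the data-driven method; all
# numerical values below are assumed for illustration.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration_ct_lqr(A, B, Q, R, K0, num_iter=20):
    K = K0
    for _ in range(num_iter):
        Ak = A - B @ K                                   # closed-loop matrix under current policy
        # Policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B' P
        K = np.linalg.solve(R, B.T @ P)
    return K, P

if __name__ == "__main__":
    A = np.array([[0.0, 1.0], [-1.0, 2.0]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.array([[0.0, 5.0]])                          # assumed initial admissible (stabilizing) gain
    K, P = policy_iteration_ct_lqr(A, B, Q, R, K0)
    print("near-optimal gain:\n", K)
```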
Abstract: This paper studies the robustness of reinforcement learning algorithms in the presence of errors. Specifically, we revisit the benchmark problem of discrete-time linear quadratic regulation (LQR) and study the long-standing open question: under what conditions is the policy iteration method robustly stable for dynamical systems with unbounded, continuous state and action spaces? Using advanced stability results in control theory, it is shown that policy iteration for LQR is inherently robust to small errors and enjoys local input-to-state stability: whenever the error in each iteration is bounded and small, the solutions of the policy iteration algorithm are also bounded and, moreover, enter and stay in a small neighborhood of the optimal LQR solution. As an application, a novel off-policy optimistic least-squares policy iteration is proposed for the LQR problem in which the system dynamics are subject to additive stochastic disturbances. The new results on robust reinforcement learning are validated by a numerical example.
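The "policy iteration with errors" setting can be illustrated with a short sketch: exact discrete-time LQR policy iteration (Hewer's recursion) with a small bounded perturbation injected at every policy-evaluation step. This is only a toy illustration of the error model; the plant matrices, the noise level eps, and the iteration count are assumptions, not the paper's example.

```python
# Discrete-time LQR policy iteration with bounded evaluation errors, to mimic the
# robustness setting described above.  All numerical values are illustrative.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

def noisy_policy_iteration_dt_lqr(A, B, Q, R, K0, eps=1e-3, num_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    K = K0
    for _ in range(num_iter):
        Ak = A - B @ K
        # Inexact policy evaluation: P solves P = Ak' P Ak + Q + K' R K, plus a small error
        P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)
        P = P + eps * rng.standard_normal(P.shape)
        P = (P + P.T) / 2                                  # keep the iterate symmetric
        # Policy improvement with the perturbed value matrix
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q, R = np.eye(2), np.eye(1)
    K0 = np.array([[1.0, 2.0]])                            # assumed initial stabilizing gain
    K = noisy_policy_iteration_dt_lqr(A, B, Q, R, K0)
    P_star = solve_discrete_are(A, B, Q, R)
    K_star = np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)
    print("distance to optimal gain:", np.linalg.norm(K - K_star))
```

Consistent with the local input-to-state stability result, the perturbed iterates remain bounded and the final gain stays close to the optimal one when eps is small.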
Abstract: In this paper, a new reinforcement learning (RL) method, known as temporal-differential learning, is introduced. Compared with the traditional temporal-difference learning method, it plays a crucial role in developing novel RL techniques for continuous environments. In particular, the continuous-time least-squares policy evaluation (CT-LSPE) and continuous-time temporal-differential (CT-TD) learning methods are developed. Both theoretical and empirical evidence is provided to demonstrate the effectiveness of the proposed temporal-differential learning methodology.
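For context, the sketch below shows the conventional discrete-time, tabular TD(0) policy-evaluation update that the continuous-time methods above are contrasted with; it is not the CT-TD or CT-LSPE algorithm itself, and the random-walk environment and hyperparameters are illustrative assumptions.

```python
# Conventional tabular TD(0) policy evaluation on a 5-state random walk, shown only
# as the traditional temporal-difference baseline; all settings are assumed.
import numpy as np

def td0_policy_evaluation(num_states=5, episodes=2000, alpha=0.1, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    V = np.zeros(num_states + 2)              # states 0 and num_states+1 are terminal
    for _ in range(episodes):
        s = (num_states + 1) // 2             # start in the middle of the walk
        while s not in (0, num_states + 1):
            s_next = s + rng.choice([-1, 1])                 # uniform random policy
            r = 1.0 if s_next == num_states + 1 else 0.0     # reward only at the right end
            # TD(0) update: move V(s) toward the one-step bootstrapped target
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V[1:num_states + 1]

if __name__ == "__main__":
    print("estimated values:", np.round(td0_policy_evaluation(), 3))
    print("true values:     ", np.round(np.arange(1, 6) / 6, 3))
```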
Abstract: This paper investigates the distributed tracking control problem for a class of Euler-Lagrange multi-agent systems in which the agents can only measure their positions. In this case, the lack of a separation principle and the strong nonlinearity in the unmeasurable states pose severe technical challenges to global output-feedback control design. To overcome these difficulties, a globally nonsingular coordinate transformation matrix in upper triangular form is first proposed such that the nonlinear dynamic model can be partially linearized with respect to the unmeasurable states. Next, a new type of velocity observer is designed to estimate the unmeasurable velocities of each system. Then, based on the outputs of the velocity observers, we propose distributed control laws that enable the coordinated tracking control system to achieve uniform global exponential stability (UGES). Both theoretical analysis and numerical simulations are presented to validate the effectiveness of the proposed control scheme. Following the original paper, a typo and a mistake are corrected.