Abstract: Merely pursuing performance may adversely affect safety, while a conservative policy for safe exploration degrades performance. How to balance safety and performance in learning-based control problems is an interesting yet challenging issue. This paper aims to enhance system performance with a safety guarantee when solving reinforcement learning (RL)-based optimal control problems for nonlinear systems subject to high-relative-degree state constraints and unknown time-varying disturbances/actuator faults. First, to combine control barrier functions (CBFs) with RL, a new type of CBF, termed the high-order reciprocal control barrier function (HO-RCBF), is proposed to handle high-relative-degree constraints during the learning process. Then, the concept of gradient similarity is introduced to quantify the relationship between the gradient of safety and the gradient of performance. Finally, gradient manipulation and adaptive mechanisms are incorporated into the safe RL framework to enhance performance with a safety guarantee. Two simulation examples illustrate that the proposed safe RL framework can address high-relative-degree constraints, enhance safety robustness, and improve system performance.
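The abstract does not spell out the update rule, but the gradient-similarity idea can be sketched numerically: compute the cosine similarity between the performance gradient and the safety gradient and, when the two conflict, manipulate the performance gradient before applying the update. The projection rule, function names, and step size below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gradient_similarity(g_perf, g_safe):
    """Cosine similarity between performance and safety gradients
    (one plausible way to quantify their relationship)."""
    denom = np.linalg.norm(g_perf) * np.linalg.norm(g_safe) + 1e-12
    return float(g_perf @ g_safe) / denom

def manipulated_step(g_perf, g_safe, eta=1e-2):
    """One projection-style gradient-manipulation step (an assumption):
    if the gradients conflict (negative similarity), remove the component
    of the performance gradient that opposes the safety gradient."""
    if gradient_similarity(g_perf, g_safe) < 0.0:
        g_perf = g_perf - (g_perf @ g_safe) / (g_safe @ g_safe + 1e-12) * g_safe
    return -eta * (g_perf + g_safe)   # combined descent direction

# Toy usage with conflicting gradients
g_p, g_s = np.array([1.0, 0.0]), np.array([-0.5, 1.0])
print(gradient_similarity(g_p, g_s))   # negative: safety and performance conflict
print(manipulated_step(g_p, g_s))      # opposing performance component removed
```

After the projection, the performance gradient no longer pushes against the safety gradient, which is the qualitative behavior the abstract attributes to gradient manipulation.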
Abstract: In this paper, a bipartite output regulation problem is solved for a class of nonlinear multi-agent systems over static signed communication networks. To address the problem, a nonlinear distributed observer is proposed for a nonlinear exosystem with cooperation-competition interactions. Sufficient conditions are provided to guarantee its existence, and the exponential stability of the observer is established. As a practical application, a leader-following bipartite consensus problem is solved for a class of nonlinear multi-agent systems based on the observer. Finally, a network of multiple pendulum systems is used to demonstrate the feasibility of the proposed design. A possible application of the approach to generating specific Turing patterns is also presented.
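As a rough numerical illustration of a distributed observer over a signed network, the sketch below pins one follower to a nonlinear exosystem (a Van der Pol oscillator, chosen only because its vector field is odd, so the sign-flipped trajectory is also a solution) and couples the others through a structurally balanced signed graph. The graph, gains, and exosystem are assumptions for illustration, not the paper's construction.

```python
import numpy as np

np.random.seed(0)

def f(v):
    """Illustrative nonlinear exosystem (Van der Pol); note f(-v) = -f(v)."""
    return np.array([v[1], -v[0] + (1.0 - v[0] ** 2) * v[1]])

# Structurally balanced signed adjacency: agents {0, 1} cooperate,
# agent 2 competes with both (negative weights).
A = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])
b = np.array([1.0, 0.0, 0.0])      # only agent 0 measures the leader

mu, dt = 5.0, 1e-3
v = np.array([1.0, 0.0])           # leader (exosystem) state
eta = np.random.randn(3, 2)        # distributed observer states

for _ in range(20000):
    eta_new = eta.copy()
    for i in range(3):
        # signed coupling: sgn(a_ij) * eta_j attracts (cooperation)
        # or repels toward the mirrored trajectory (competition)
        coup = sum(abs(A[i, j]) * (np.sign(A[i, j]) * eta[j] - eta[i])
                   for j in range(3))
        coup = coup + b[i] * (v - eta[i])          # pinning to the leader
        eta_new[i] = eta[i] + dt * (f(eta[i]) + mu * coup)
    eta = eta_new
    v = v + dt * f(v)

print(v)     # leader state
print(eta)   # under these gains, eta[0], eta[1] ≈ v and eta[2] ≈ -v
```

The sign flip on agent 2's estimate is the bipartite feature: followers on the antagonistic side of a structurally balanced graph track the leader's trajectory with opposite sign.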
Abstract: The knowledge-based leader-following synchronization problem for heterogeneous nonlinear multi-agent systems is challenging because the leader's dynamics are unknown to all follower nodes. This paper proposes a learning-based fully distributed observer for a class of nonlinear leader systems that can simultaneously learn the leader's dynamics and estimate its state. The class of leader dynamics considered here does not require a bounded Jacobian matrix. Based on this learning-based distributed observer, an adaptive distributed control law is further synthesized to solve the leader-following synchronization problem for multiple Euler-Lagrange systems subject to an uncertain nonlinear leader system. The results are illustrated by a simulation example.
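A fully distributed version is beyond a short sketch, but the core "learn dynamics and states simultaneously" idea can be illustrated with a single adaptive identifier: parameterize the unknown leader dynamics on an assumed basis, run a series-parallel observer, and adapt the weights with a gradient law. The basis functions, gains, and Van der Pol leader below are illustrative assumptions.

```python
import numpy as np

phi = lambda v: np.array([v[0], v[1], v[0] ** 2 * v[1]])  # assumed basis
W_true = np.array([[ 0.0, 1.0,  0.0],
                   [-1.0, 1.0, -1.0]])   # Van der Pol written in this basis

dt, k, gamma = 1e-3, 5.0, 10.0
v = np.array([1.0, 0.0])     # leader state (measured here; in the paper only
                             # the leader's neighbors access it directly)
eta = np.zeros(2)            # state estimate
W_hat = np.zeros((2, 3))     # learned dynamics

for _ in range(200000):
    e, pv = v - eta, phi(v)
    eta = eta + dt * (W_hat @ pv + k * e)         # series-parallel observer
    W_hat = W_hat + dt * gamma * np.outer(e, pv)  # gradient adaptation law
    v = v + dt * (W_true @ pv)                    # true leader dynamics

print(np.round(W_hat, 2))   # approaches W_true given persistent excitation
```

With the Lyapunov function V = |e|²/2 + tr(W̃ᵀW̃)/(2γ) this identifier gives V̇ = -k|e|², so the state error converges and the weights converge under persistent excitation; the distributed observer in the paper instead propagates both the state and the dynamics estimates through the network.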
Abstract: Mixed time- and event-triggered cooperative output regulation for heterogeneous distributed systems is investigated in this paper. A distributed observer with time-triggered observations is proposed to estimate the state of the leader, and an auxiliary observer with event-triggered communication is designed to reduce the information exchange among followers. A necessary and sufficient condition for the existence of desirable time-triggered observers is established, and the delicate relationships among sampling periods, network topologies, and reference signals are revealed. An event-triggering mechanism based on locally sampled data is proposed to regulate the communication among agents, and the convergence of the estimation errors under this mechanism holds for a class of positive and convergent triggering functions, which includes the commonly used exponential function as a special case. The mixed time- and event-triggered system naturally excludes Zeno behavior since the system updates at discrete instants. When the triggering function is bounded by exponential functions, an analytical characterization of the relationship among sampling, event triggering, and inter-event behavior is established. Finally, several examples are provided to illustrate the effectiveness and merits of the theoretical results.
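The triggering idea can be sketched with a toy two-agent consensus loop: each agent rebroadcasts its state only when the gap to its last broadcast exceeds a positive, convergent triggering function (here the exponential special case), and the condition is checked only at discrete sampling instants, which rules out Zeno behavior by construction. The dynamics, threshold, and rates below are assumptions for illustration, not the paper's cooperative output regulation setup.

```python
import numpy as np

# Positive, convergent triggering function (exponential special case)
sigma = lambda t, c=0.5, alpha=0.8: c * np.exp(-alpha * t)

dt, T = 1e-3, 10.0
x = np.array([1.0, -1.0])   # toy scalar agent states
x_hat = x.copy()            # last broadcast values
events = [0, 0]             # event counts per agent

t = 0.0
while t < T:
    # consensus dynamics driven by the last *broadcast* values
    u = np.array([x_hat[1] - x_hat[0], x_hat[0] - x_hat[1]])
    x = x + dt * u
    for i in range(2):
        # event condition, checked only at sampling instants t = k * dt,
        # so inter-event times are bounded below by dt (no Zeno behavior)
        if abs(x[i] - x_hat[i]) >= sigma(t):
            x_hat[i] = x[i]      # broadcast and reset the gap
            events[i] += 1
    t += dt

print(x, events)   # states near consensus; finitely many events per agent
```

Because sigma(t) decays but stays positive, communication becomes sparse as the agents approach consensus while the estimation (here, consensus) error still converges, mirroring the trade-off the abstract describes between communication load and convergence.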