Abstract:We propose a novel approach based on Denoising Diffusion Probabilistic Models (DDPMs) to control nonlinear dynamical systems. DDPMs are state-of-the-art generative models that have achieved success in a wide variety of sampling tasks. In our framework, we pose the feedback control problem as a generative task of drawing samples from a target set under control system constraints. The forward process of a DDPM constructs trajectories originating from the target set by adding noise. We learn to control the dynamical system in reverse such that the terminal state belongs to the target set. For control-affine systems without drift, we prove that the control system can exactly track the trajectory of the forward process in reverse whenever the Lie bracket-based condition for controllability holds. We numerically study our approach on various nonlinear systems and verify our theoretical results. We also conduct numerical experiments in a physics engine for cases beyond our theoretical results.
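To make the forward-process construction concrete, here is a minimal NumPy sketch that noises samples drawn from a toy target set (points on the unit circle); the target set, noise schedule, and dimensions are illustrative assumptions rather than the paper's actual setup. The trajectories it produces, read backwards in time, are the references the controlled system would be trained to track.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative target set (an assumption): points on the unit circle.
    n, T = 256, 100
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    x = np.stack([np.cos(theta), np.sin(theta)], axis=1)

    betas = np.linspace(1e-4, 2e-2, T)  # assumed linear variance schedule

    # Forward process: q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I).
    trajectory = [x.copy()]
    for t in range(T):
        x = np.sqrt(1.0 - betas[t]) * x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
        trajectory.append(x.copy())

    # trajectory[-1] is approximately Gaussian; a controller tracking
    # trajectory[::-1] terminates in the target set.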
Abstract:In Score-based Generative Models (SGMs), the state of the art in generative modeling, stochastic reverse processes are known to perform better than their deterministic counterparts. This paper delves into the heart of this phenomenon, comparing neural ordinary differential equations (ODEs) and neural stochastic differential equations (SDEs) as reverse processes. We adopt a control-theoretic perspective, posing the approximation of the reverse process as a trajectory tracking problem. We analyze the ability of neural SDEs to approximate trajectories of the Fokker-Planck equation, revealing the advantages of stochasticity. First, neural SDEs exhibit a powerful regularizing effect, enabling $L^2$-norm trajectory approximation that surpasses the Wasserstein-metric approximation achieved by neural ODEs under similar conditions, even when the reference vector field or score function is not Lipschitz. Applying this result, we establish the class of distributions that can be sampled using score matching in SGMs, relaxing the Lipschitz requirement on the gradient of the data distribution imposed in the existing literature. Second, we show that this approximation property is preserved when the network width is limited to the input dimension of the network. In this limited-width case, the weights act as control inputs, framing our analysis as a controllability problem for neural SDEs in probability density space. This sheds light on how noise helps steer the system toward the desired solution and illuminates the empirical success of stochasticity in generative modeling.
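The contrast between the two reverse processes can be seen on a toy example where the score is available in closed form. The NumPy sketch below (the Gaussian data distribution and all parameters are illustrative assumptions) samples a one-dimensional Gaussian through both the reverse SDE and the probability-flow ODE of a variance-preserving forward SDE.

    import numpy as np

    rng = np.random.default_rng(0)

    beta, T, steps, n = 1.0, 5.0, 500, 10_000
    dt = T / steps
    sigma0_sq = 0.25  # toy data distribution N(0, 0.25) (an assumption)

    def var_t(t):
        # Marginal variance of the forward SDE dX = -0.5*beta*X dt + sqrt(beta) dW.
        return sigma0_sq * np.exp(-beta * t) + 1.0 - np.exp(-beta * t)

    def score(x, t):
        # Closed-form Gaussian score: grad log p_t(x) = -x / var_t(t).
        return -x / var_t(t)

    # Both reverse processes start from the same terminal Gaussian samples.
    x_sde = rng.standard_normal(n)
    x_ode = x_sde.copy()
    for k in range(steps, 0, -1):
        t = k * dt
        # Reverse SDE: dX = [f - g^2 * score] dt + g dW, integrated backward.
        x_sde = x_sde - (-0.5 * beta * x_sde - beta * score(x_sde, t)) * dt \
                + np.sqrt(beta * dt) * rng.standard_normal(n)
        # Probability-flow ODE: dx/dt = f - 0.5 * g^2 * score.
        x_ode = x_ode - (-0.5 * beta * x_ode - 0.5 * beta * score(x_ode, t)) * dt

    print(x_sde.var(), x_ode.var())  # both approach sigma0_sq = 0.25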
Abstract:In numerous robotics and mechanical engineering applications, among others, data is often constrained to smooth manifolds due to the presence of rotational degrees of freedom. Common data-driven and learning-based methods such as neural ordinary differential equations (ODEs), however, typically fail to satisfy these manifold constraints and perform poorly in these applications. To address this shortcoming, in this paper we study a class of neural ordinary differential equations that, by design, leave a given manifold invariant, and characterize their properties by leveraging the controllability properties of control-affine systems. In particular, using a result due to Agrachev and Caponigro on approximating diffeomorphisms with flows of feedback control systems, we show that any map that can be represented as the flow of a manifold-constrained dynamical system can also be approximated using the flow of a manifold-constrained neural ODE, whenever a certain controllability condition is satisfied. Additionally, we show that this universal approximation property holds when the neural ODE has limited width in each layer, thus leveraging the depth of the network instead for approximation. We verify our theoretical findings using numerical experiments in PyTorch for the manifolds $S^2$ and the 3-dimensional orthogonal group SO(3), which are model manifolds for mechanical systems such as spacecraft and satellites. We also compare the performance of the manifold-invariant neural ODE with classical neural ODEs that ignore the manifold invariance and show the superiority of our approach in terms of accuracy and sample complexity.
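As a minimal illustration of such a manifold-invariant architecture, the sketch below (our construction for illustration, not the paper's exact one) uses the rotational fields X_i(x) = e_i × x, which are tangent to $S^2$; the flow of dx/dt = sum_i u_i(t) X_i(x) therefore stays on the sphere for any choice of the piecewise-constant weights, and the width equals the input dimension, matching the limited-width result.

    import numpy as np

    rng = np.random.default_rng(0)

    def flow(x, weights, dt=1e-2, substeps=10):
        # weights: (layers, 3), one piecewise-constant control per layer.
        for u in weights:
            for _ in range(substeps):
                v = np.cross(u, x)  # sum_i u_i (e_i x x) = u x x, tangent to S^2
                x = x + dt * v
                # The exact flow stays on the sphere; this projection only
                # corrects the drift of the explicit Euler integrator.
                x = x / np.linalg.norm(x)
        return x

    x0 = np.array([0.0, 0.0, 1.0])
    weights = rng.standard_normal((5, 3))  # illustrative random weights
    x1 = flow(x0, weights)
    print(np.linalg.norm(x1))  # remains 1: the sphere is invariant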
Abstract:We consider the controllability problem for the continuity equation, corresponding to neural ordinary differential equations (ODEs), which describes how a probability measure is pushed forward by the flow. We show that the controlled continuity equation has very strong controllability properties. In particular, a given solution of the continuity equation corresponding to a bounded Lipschitz vector field defines a trajectory on the set of probability measures. For this trajectory, we show that there exist piecewise-constant training weights for a neural ODE such that the solution of the continuity equation corresponding to the neural ODE is arbitrarily close to it. As a corollary of this result, we establish that the continuity equation of the neural ODE is approximately controllable on the set of compactly supported probability measures that are absolutely continuous with respect to the Lebesgue measure.
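The piecewise-constant-weights statement can be visualized with particles: push an empirical approximation of the initial measure through a neural ODE whose weights switch between time intervals, and the empirical measure tracks the continuity-equation trajectory. A schematic NumPy sketch, with an assumed one-layer tanh vector field and random weights standing in for trained ones:

    import numpy as np

    rng = np.random.default_rng(0)

    def vector_field(x, W, b):
        # x' = tanh(W x + b): a standard neural-ODE right-hand side
        # (an illustrative choice, not the paper's exact architecture).
        return np.tanh(x @ W.T + b)

    # Empirical approximation of an absolutely continuous initial measure.
    particles = rng.uniform(-1.0, 1.0, size=(2000, 2))

    # Piecewise-constant training weights: one (W, b) pair per time interval.
    schedule = [(rng.standard_normal((2, 2)), rng.standard_normal(2))
                for _ in range(4)]

    dt, substeps = 0.05, 5
    for W, b in schedule:
        for _ in range(substeps):
            particles = particles + dt * vector_field(particles, W, b)

    # `particles` now samples the pushforward of the initial measure by the
    # neural-ODE flow, approximating the continuity-equation solution.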
Abstract:As a counterpoint to classical stochastic particle methods for linear diffusion equations, we develop a deterministic particle method for the weighted porous medium equation (WPME) and prove its convergence on bounded time intervals. This generalizes related work on blob methods for unweighted porous medium equations. From a numerical analysis perspective, our method has several advantages: it is meshfree, preserves the gradient flow structure of the underlying PDE, converges in arbitrary dimension, and captures the correct asymptotic behavior in simulations. That our method succeeds in capturing the long-time behavior of WPME is significant from the perspective of related problems in quantization. Just as the Fokker-Planck equation provides a way to quantize a probability measure $\bar{\rho}$ by evolving an empirical measure according to stochastic Langevin dynamics so that the empirical measure flows toward $\bar{\rho}$, our particle method provides a way to quantize $\bar{\rho}$ according to deterministic particle dynamics approximating WPME. In this way, our method has natural applications to multi-agent coverage algorithms and sampling of probability measures. A specific case of our method corresponds exactly to the mean-field dynamics of training a two-layer neural network with a radial basis function activation. From this perspective, our convergence result shows that, in the overparametrized regime and as the variance of the radial basis functions goes to zero, the continuum limit is given by WPME. This generalizes previous results, which considered the case of a uniform data distribution, to the more general inhomogeneous setting. As a consequence of our convergence result, we identify conditions on the target function and data distribution under which convexity of the energy landscape emerges in the continuum limit.
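For a feel of the deterministic particle dynamics involved, here is a schematic 1D sketch for the unweighted porous medium equation with exponent two and a Gaussian mollifier, a simplified stand-in for the paper's weighted scheme; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    eps = 0.1  # mollifier bandwidth (illustrative)
    phi  = lambda r: np.exp(-r**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)
    dphi = lambda r: -r / eps**2 * phi(r)

    x = rng.uniform(-0.5, 0.5, size=200)  # particle positions
    w = np.full_like(x, 1.0 / x.size)     # equal particle weights

    dt = 1e-3
    for _ in range(2000):
        r = x[:, None] - x[None, :]
        # Each particle descends the gradient of the mollified density,
        # mimicking the gradient-flow structure of the PDE; the cloud
        # spreads like a self-similar porous-medium profile.
        x = x - dt * (dphi(r) * w[None, :]).sum(axis=1)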
Abstract:In this paper, we propose a probabilistic consensus-based multi-robot search strategy that is robust to communication link failures, and thus is suitable for disaster-affected areas. The robots, capable of only local communication, explore a bounded environment according to a random walk modeled by a discrete-time discrete-state (DTDS) Markov chain and exchange information with neighboring robots, resulting in a time-varying communication network topology. The proposed strategy is proved to achieve consensus, here defined as agreement on the presence of a static target, with no assumptions on the connectivity of the communication network. Using numerical simulations, we investigate the effects of the robot population size, domain size, and information uncertainty on the consensus time statistics under this scheme. We also validate our theoretical results with 3D physics-based simulations in Gazebo. The simulations demonstrate that all robots achieve consensus in finite time with the proposed search strategy over a range of robot densities in the environment.
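A minimal simulation sketch of this strategy, with a detection-and-relay information exchange standing in for the paper's probabilistic consensus update; the grid size, communication radius, and population are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    L, n_robots, comm_radius = 10, 8, 1.5
    target = np.array([7, 7])  # static target (assumed location)

    pos = rng.integers(0, L, size=(n_robots, 2))
    informed = np.zeros(n_robots)  # 1 once a robot knows the target exists

    for step in range(500):
        # DTDS random walk: each robot moves to a random neighboring cell,
        # clipped to stay inside the bounded environment.
        pos = np.clip(pos + rng.integers(-1, 2, size=(n_robots, 2)), 0, L - 1)
        informed[np.all(pos == target, axis=1)] = 1.0
        # Local exchange over the time-varying communication network.
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
        adj = d <= comm_radius
        informed = np.maximum(informed, (adj * informed[None, :]).max(axis=1))
        if informed.all():
            print(f"consensus on target presence at step {step}")
            break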
Abstract:In this paper, we present a reinforcement learning approach to designing a control policy for a "leader" agent that herds a swarm of "follower" agents, via repulsive interactions, as quickly as possible to a target probability distribution over a strongly connected graph. The leader control policy is a function of the swarm distribution, which evolves over time according to a mean-field model in the form of an ordinary difference equation. The dependence of the policy on agent populations at each graph vertex, rather than on individual agent activity, simplifies the observations required by the leader and enables the control strategy to scale with the number of agents. Two temporal-difference learning algorithms, SARSA and Q-Learning, are used to generate the leader control policy based on the follower agent distribution and the leader's location on the graph. A simulation environment corresponding to a grid graph with 4 vertices was used to train and validate the control policies for follower agent populations ranging from 10 to 100. Finally, the control policies trained on 100 simulated agents were used to successfully redistribute a physical swarm of 10 small robots to a target distribution among 4 spatial regions.
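The learning loop itself is the standard tabular one; the sketch below shows the Q-Learning variant on a toy stand-in for the herding environment, where the transition dynamics, the binning of the swarm distribution, and the reward are hypothetical placeholders rather than the paper's mean-field model.

    import numpy as np

    rng = np.random.default_rng(0)

    n_vertices, n_bins, n_actions = 4, 5, 4   # 4-vertex grid graph, coarse bins
    alpha, gamma, eps_greedy = 0.1, 0.95, 0.1

    # Q-table over (leader vertex, binned swarm distribution, leader action).
    Q = np.zeros((n_vertices, n_bins, n_actions))

    def env_step(leader, dist_bin, action):
        # Placeholder dynamics: leader moves, followers drift (hypothetical).
        leader = (leader + action) % n_vertices
        dist_bin = int(np.clip(dist_bin + rng.integers(-1, 2), 0, n_bins - 1))
        reward = -(n_bins - 1 - dist_bin)  # closer to target distribution = better
        return leader, dist_bin, reward

    leader, dist_bin = 0, 0
    for _ in range(10_000):
        a = (rng.integers(n_actions) if rng.random() < eps_greedy
             else int(Q[leader, dist_bin].argmax()))
        nl, nd, r = env_step(leader, dist_bin, a)
        # Q-Learning temporal-difference update.
        Q[leader, dist_bin, a] += alpha * (r + gamma * Q[nl, nd].max()
                                           - Q[leader, dist_bin, a])
        leader, dist_bin = nl, nd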
Abstract:This paper presents a novel partial differential equation (PDE)-based framework for controlling an ensemble of robots, which have limited sensing and actuation capabilities and exhibit stochastic behaviors, to perform mapping and coverage tasks. We model the ensemble population dynamics as an advection-diffusion-reaction PDE model and formulate the mapping and coverage tasks as identification and control problems for this model. In the mapping task, robots are deployed over a closed domain to gather data, which is unlocalized and independent of robot identities, for reconstructing the unknown spatial distribution of a region of interest. We frame this task as a convex optimization problem whose solution represents the region as a spatially dependent coefficient in the PDE model. We then consider a coverage problem in which the robots must perform a desired activity at a programmable probability rate to achieve a target spatial distribution of activity over the reconstructed region of interest. We formulate this task as an optimal control problem in which the PDE model is expressed as a bilinear control system, with the robots' coverage activity rate and velocity field defined as the control inputs. We validate our approach with simulations of a combined mapping and coverage scenario in two environments with three target coverage distributions.
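A 1D finite-difference sketch of the advection-diffusion-reaction structure, with the activity rate k(x) and velocity field v(x) as the bilinear control inputs; the discretization, the periodic boundary handling, and all parameter values are illustrative assumptions.

    import numpy as np

    # rho_t = D rho_xx - (v rho)_x - k(x) rho   (robot density)
    # act_t = k(x) rho                          (accumulated coverage activity)
    N, Dx, D = 100, 0.01, 0.005
    dt = 0.2 * Dx**2 / D                      # stable explicit time step
    xg = np.linspace(0.0, 1.0, N)
    rho = np.full(N, 1.0)                     # uniform initial robot density
    act = np.zeros(N)
    k = 0.5 * np.exp(-((xg - 0.7) / 0.1)**2)  # cover the region near x = 0.7
    v = np.zeros(N)                           # zero advection for simplicity

    for _ in range(5000):
        lap = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / Dx**2
        adv = (np.roll(v * rho, -1) - np.roll(v * rho, 1)) / (2 * Dx)
        rho = rho + dt * (D * lap - adv - k * rho)  # robots switch to coverage
        act = act + dt * k * rho                    # activity accumulates

    # act concentrates where k(x) is large: a target distribution of activity.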
Abstract:This paper, the second of a two-part series, presents a method for mean-field feedback stabilization of a swarm of agents on a finite state space whose time evolution is modeled as a continuous-time Markov chain (CTMC). The resulting (mean-field) control problem is that of controlling a nonlinear system with desired global stability properties. First, we prove that any probability distribution with a strongly connected support can be stabilized using time-invariant inputs. Second, we show the asymptotic controllability of all possible probability distributions, including distributions that assign zero density to some states and that do not necessarily have a strongly connected support. Lastly, we demonstrate that there always exists a globally asymptotically stabilizing decentralized density feedback law with the additional property that the control inputs are zero at equilibrium, whenever the graph is strongly connected and bidirected. The problem of synthesizing closed-loop polynomial feedback is then framed as an optimization problem using state-of-the-art sum-of-squares optimization tools. The optimization problem searches for polynomial feedback laws that make the candidate Lyapunov function a stability certificate for the resulting closed-loop system. Our methodology is tested for two cases on a five-vertex graph, and the stabilization properties of the constructed control laws are validated with numerical simulations of the corresponding system of ordinary differential equations.
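For intuition, the sketch below simulates the mean-field ODE on a bidirected five-vertex cycle under a hand-picked decentralized density feedback that vanishes at equilibrium; this is an illustrative law, not one synthesized by the sum-of-squares procedure described above.

    import numpy as np

    n = 5  # bidirected cycle graph
    edges = [(i, (i + 1) % n) for i in range(n)] + [((i + 1) % n, i) for i in range(n)]
    x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # all agents start at vertex 0
    x_target = np.full(n, 0.2)               # uniform target distribution

    def feedback(x):
        # Rate on edge (i, j): positive part of the local density surplus at i.
        # Decentralized, nonnegative, and zero at x = x_target.
        return {(i, j): max(0.0, x[i] - x_target[i]) for (i, j) in edges}

    dt = 0.01
    for _ in range(5000):
        u = feedback(x)
        dx = np.zeros(n)
        for (i, j), rate in u.items():
            dx[i] -= rate * x[i]  # outflow along edge (i, j)
            dx[j] += rate * x[i]
        x = x + dt * dx

    print(np.round(x, 3))  # approaches [0.2, 0.2, 0.2, 0.2, 0.2]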
Abstract:In this paper, we study the controllability and stabilizability properties of the Kolmogorov forward equation of a continuous-time Markov chain (CTMC) evolving on a finite state space, using the transition rates as the control parameters. First, we prove small-time local and global controllability from and to strictly positive equilibrium configurations when the underlying graph is strongly connected. Second, we show that there always exists a locally exponentially stabilizing decentralized linear (density-)feedback law that takes zero value at equilibrium and respects the graph structure, provided that the transition rates are allowed to be negative and the desired target density lies in the interior of the set of probability densities. For bidirected graphs, that is, graphs where a directed edge in one direction implies an edge in the opposite direction, we show that this linear control law can be realized using a decentralized rational feedback law of the form $k(x) = a(x) + b(x)f(x)/g(x)$ that also respects the graph structure and control constraints (positivity and zero value at equilibrium). This opens up the possibility of using Linear Matrix Inequality (LMI)-based tools to algorithmically construct decentralized density feedback controllers for stabilization of a robotic swarm to a target task distribution with no task-switching at equilibrium, as we demonstrate with several numerical examples.
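A two-state sketch of the forward equation under a linear density feedback that is zero at equilibrium; the commanded rates go negative along the way, which is precisely why the paper passes to a positive rational realization. The gain and target density here are illustrative choices, not outputs of the LMI construction.

    import numpy as np

    # Two-state chain: x1_dot = -u12 x1 + u21 x2, x2_dot = -x1_dot,
    # with the transition rates u12, u21 as the control inputs.
    k_gain = 2.0
    x_eq = np.array([0.3, 0.7])  # target density (illustrative)
    x = np.array([0.9, 0.1])

    dt = 0.01
    for _ in range(2000):
        u12 = k_gain * (x[0] - x_eq[0])  # negative whenever x1 < 0.3
        u21 = k_gain * (x[1] - x_eq[1])
        dx1 = -u12 * x[0] + u21 * x[1]
        x = x + dt * np.array([dx1, -dx1])

    print(np.round(x, 3))  # -> [0.3, 0.7], with u = 0 at equilibrium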