Abstract:Haptic teleoperation plays a key role in extending human capabilities to perform complex tasks remotely through a robotic system. The impact of haptics is far-reaching and can improve the sensory awareness and motor accuracy of the operator. In this context, a key challenge is attaining natural, stable and safe haptic human-robot interaction. Meeting these conflicting requirements is particularly crucial for complex procedures, e.g. medical ones. To address this challenge, in this work we develop a novel haptic bilateral teleoperation system (HBTS) featuring virtualized force feedback, based on the motion error generated by an admittance-controlled robot. This approach decouples the force rendering system from the control of the interaction: the rendered force is assigned the desired dynamics, while the admittance control parameters are tuned separately to maximize interaction performance. Furthermore, recognizing the need to limit the forces exerted by the robot on the environment to ensure a safe interaction, we embed a saturation strategy on the motion references that the haptic device provides to the admittance controller. We validate the different aspects of the proposed HBTS against two other architectures through a teleoperated blackboard-writing experiment. The results indicate that the proposed HBTS improves the naturalness of teleoperation, as well as the safety and accuracy of the interaction.
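A minimal, single-axis sketch of the interplay described above is given below: the haptic motion reference is saturated, passed through an admittance model, and the force feedback is rendered from the resulting motion error. All names, gains and limits are hypothetical placeholders, not the values or interfaces of the actual system.

```python
# Hypothetical 1-DoF sketch: saturated reference -> admittance model ->
# force feedback rendered from the motion (tracking) error.
import numpy as np

def saturate(x_ref, x_min=-0.05, x_max=0.05):
    """Limit the motion reference coming from the haptic device."""
    return np.clip(x_ref, x_min, x_max)

def admittance_step(x, v, x_ref, f_ext, dt=1e-3, M=1.0, D=20.0, K=400.0):
    """One Euler step of M*a + D*v + K*(x - x_ref) = f_ext."""
    a = (f_ext - D * v - K * (x - x_ref)) / M
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

def rendered_force(x_cmd, x_meas, k_render=150.0):
    """Virtual force feedback proportional to the motion error."""
    return k_render * (x_cmd - x_meas)

# Toy closed loop: a constant contact force opposes the commanded motion.
x, v = 0.0, 0.0
for _ in range(1000):
    x_cmd = saturate(0.1)            # operator asks for 0.1 m, limited to 0.05 m
    x, v = admittance_step(x, v, x_cmd, f_ext=-5.0)
f_haptic = rendered_force(x_cmd, x)
print(f"robot position: {x:.4f} m, rendered force: {f_haptic:.2f} N")
```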
Abstract:A key problem in using microorganisms as bio-factories is reaching and maintaining cellular communities at a desired density and composition so that they can efficiently convert their biomass into useful compounds. Bioreactors are promising technological platforms for the real-time, scalable control of cellular density. In this work, we developed a learning-based strategy to expand the toolbox of control algorithms capable of regulating the density of a \textit{single} bacterial population in bioreactors. Specifically, we used a sim-to-real paradigm, in which a simple mathematical model, calibrated on a small amount of data, was used to generate synthetic data for training the controller. The resulting policy was then exhaustively tested in vivo using a low-cost bioreactor known as Chi.Bio, assessing its performance and robustness. In addition, we compared its performance with that of more traditional controllers (namely, a PI and an MPC), confirming that the learning-based controller exhibits similar performance in vivo. Our work showcases the viability of learning-based strategies for the control of cellular density in bioreactors, taking a step forward toward their use for controlling the composition of microbial consortia.
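To illustrate the kind of simple model that can generate synthetic training data in a sim-to-real pipeline, the sketch below uses a logistic growth law with the dilution rate as control input. All parameter values and names are illustrative placeholders, not those identified on the Chi.Bio device.

```python
# Illustrative single-population density model: logistic growth minus dilution.
import numpy as np

class DensityEnv:
    """State = optical density (OD); control = dilution rate."""
    def __init__(self, mu=0.6, od_max=1.0, dt=0.1, od_ref=0.5, noise=0.01):
        self.mu, self.od_max, self.dt = mu, od_max, dt
        self.od_ref, self.noise = od_ref, noise
        self.od = 0.1

    def step(self, dilution_rate):
        growth = self.mu * self.od * (1.0 - self.od / self.od_max)
        self.od += self.dt * (growth - dilution_rate * self.od)
        self.od = max(self.od, 0.0)
        obs = self.od + np.random.normal(0.0, self.noise)  # noisy OD reading
        reward = -abs(obs - self.od_ref)                    # track the setpoint
        return obs, reward

env = DensityEnv()
for _ in range(50):
    obs, r = env.step(dilution_rate=0.3)   # constant dilution, just a demo
print(f"final OD: {obs:.3f}")
```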
Abstract:When addressing control problems such as regulation and tracking through reinforcement learning, it is often required to guarantee, prior to deployment, that the acquired policy meets essential performance and stability criteria, such as a desired settling time and steady-state error. Motivated by this necessity, we present a set of results and a systematic reward-shaping procedure that (i) ensures the optimal policy generates trajectories that align with specified control requirements and (ii) allows one to assess whether any given policy satisfies them. We validate our approach through comprehensive numerical experiments conducted in two representative environments from OpenAI Gym: the Inverted Pendulum swing-up problem and the Lunar Lander. Using both tabular and deep reinforcement learning methods, our experiments consistently confirm the efficacy of the proposed framework, highlighting its effectiveness in ensuring policy adherence to the prescribed control requirements.
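As a hedged sketch of the general idea (not the paper's specific shaping procedure), one can encode a settling-time and a steady-state error requirement directly in the reward, rewarding the agent only when the tracking error lies inside the tolerance band that is active at the current time. Thresholds and weights below are illustrative.

```python
# Illustrative reward shaping encoding control requirements.
def shaped_reward(error, t, settling_time=3.0, ss_tol=0.02, transient_tol=0.5):
    """Loose error bound during the transient, tight bound after the desired
    settling time; violations are penalized proportionally."""
    tol = transient_tol if t < settling_time else ss_tol
    if abs(error) <= tol:
        return 1.0           # requirement currently satisfied
    return -abs(error)       # requirement violated

# usage: reward the agent at each step with shaped_reward(x - x_ref, t)
print(shaped_reward(error=0.01, t=5.0))   # 1.0: within the steady-state band
print(shaped_reward(error=0.3,  t=5.0))   # negative: requirement violated
```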
Abstract:This paper is concerned with a finite-horizon inverse control problem, whose goal is to infer, from observations, the possibly non-convex and non-stationary cost driving the actions of an agent. In this context, we present a result that enables cost estimation by solving an optimization problem that is convex even when the agent's cost is not and when the underlying dynamics are nonlinear, non-stationary and stochastic. To obtain this result, we also study a finite-horizon forward control problem that has randomized policies as decision variables, and we give an explicit expression for its optimal solution. Moreover, we turn our findings into algorithmic procedures and demonstrate their effectiveness via both in-silico and experimental validations on real hardware.
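As an illustrative remark (not the paper's exact formulation), one intuition for why randomized policies help is that expected costs are linear in the underlying probability mass functions: over a finite state-action space,
\[
\mathbb{E}\big[c_k(x_k,u_k)\big] \;=\; \sum_{x,u} c_k(x,u)\, p_k(x)\, \pi_k(u\mid x),
\]
which is linear in the joint probability mass function $p_k(x)\,\pi_k(u\mid x)$. Objectives built from such expectations, possibly with a convex regularizer, can therefore lead to convex programs even when the cost $c_k$ itself is non-convex.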
Abstract:In this article, we provide a novel and broadly applicable contraction-theoretic approach to continuous-time, time-varying convex optimization. For any parameter-dependent contracting dynamics, we show that the tracking error between any solution trajectory and the equilibrium trajectory is uniformly upper bounded in terms of the contraction rate, the Lipschitz constant with which the parameter enters the dynamics, and the rate of change of the parameter. To apply this result to time-varying convex optimization problems, we establish the strong infinitesimal contraction of dynamics solving three canonical problems, namely monotone inclusions, linear equality-constrained problems, and composite minimization problems. For each of these problems, we prove the sharpest-known rates of contraction and provide explicit tracking error bounds between solution trajectories and minimizing trajectories. We validate our theoretical results on two numerical examples.
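To convey the flavor of such bounds (a schematic form under illustrative assumptions, not the article's exact statement): if $\dot{x} = f(x,\theta(t))$ is contracting in $x$ with rate $c > 0$ uniformly in $\theta$, $f$ is $\ell$-Lipschitz in $\theta$, and $x^\star(\theta)$ denotes the equilibrium for frozen $\theta$, then one typically obtains a bound of the form
\[
\|x(t) - x^\star(\theta(t))\| \;\le\; e^{-ct}\,\|x(0) - x^\star(\theta(0))\| \;+\; \frac{\ell}{c^{2}}\,\sup_{\tau}\|\dot{\theta}(\tau)\|,
\]
so the asymptotic tracking error scales with the rate of change of the parameter and inversely with powers of the contraction rate.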
Abstract:One of the major challenges in Deep Reinforcement Learning for control is the extensive training needed to learn the policy. Motivated by this, we present the design of the Control-Tutored Deep Q-Networks (CT-DQN) algorithm, a Deep Reinforcement Learning algorithm that leverages a control tutor, i.e., an exogenous control law, to reduce learning time. The tutor can be designed using an approximate model of the system, without any assumption of knowledge of the system's dynamics, and it is not expected to achieve the control objective if used stand-alone. During learning, the tutor occasionally suggests an action, thus partially guiding exploration. We validate our approach on three scenarios from OpenAI Gym: the inverted pendulum, lunar lander, and car racing. We demonstrate that CT-DQN achieves better or equivalent data efficiency with respect to classic function-approximation solutions.
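A hedged sketch of the tutored exploration mechanism described above: with some probability the exogenous tutor proposes the action, otherwise standard epsilon-greedy selection over the Q-values is used. The names and probabilities (beta, tutor_action, q_values) are illustrative placeholders, not the algorithm's actual interfaces.

```python
# Illustrative control-tutored action selection.
import random

def tutor_action(state):
    """Exogenous control law from an approximate model (placeholder:
    a crude switching rule on the first state component)."""
    return 0 if state[0] > 0 else 1

def select_action(state, q_values, n_actions, epsilon=0.1, beta=0.3):
    if random.random() < beta:            # occasionally follow the tutor
        return tutor_action(state)
    if random.random() < epsilon:         # standard exploration
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_values(state, a))

# usage with a dummy Q-function over 2 actions
q = lambda s, a: 0.0
print(select_action(state=(0.4, -0.1), q_values=q, n_actions=2))
```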
Abstract:We investigate the problem of using a learning-based strategy to stabilize a synthetic toggle switch via an external control approach. To overcome the data-efficiency problem that would render the algorithm unfeasible for practical use in synthetic biology, we adopt a sim-to-real paradigm in which the policy is learnt by training on a simplified model of the toggle switch and is then exploited to control a more realistic model of the switch, parameterized from in-vivo experiments. Our in-silico experiments confirm the viability of the approach, suggesting its potential use for in-vivo control implementations.
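For reference, a minimal deterministic sketch of the kind of simplified toggle-switch model one might train on: two mutually repressing genes with external, inducer-like control inputs. All parameter values are illustrative, not those fitted from in-vivo experiments.

```python
# Illustrative deterministic toggle switch: states x1, x2, inputs u1, u2.
import numpy as np

def toggle_switch_step(x, u, dt=0.1, k=(2.0, 2.0), n=2.0, d=(1.0, 1.0)):
    x1, x2 = x
    u1, u2 = u
    dx1 = k[0] / (1.0 + x2 ** n) - d[0] * x1 + u1   # gene 1 repressed by gene 2
    dx2 = k[1] / (1.0 + x1 ** n) - d[1] * x2 + u2   # gene 2 repressed by gene 1
    return np.array([x1 + dt * dx1, x2 + dt * dx2])

x = np.array([0.5, 1.5])
for _ in range(200):                   # constant inputs, just a demo
    x = toggle_switch_step(x, u=(0.1, 0.0))
print(f"steady state: x1={x[0]:.2f}, x2={x[1]:.2f}")
```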
Abstract:We present a discretized design that expounds on an algorithm recently introduced in Gagliardi and Russo (2021) to synthesize control policies from examples for constrained, possibly stochastic and nonlinear, systems. The constraints do not need to be fulfilled in the possibly noisy example data, which in turn might be collected from a system different from the one under control. For this discretized design, we discuss a number of properties and give a design pipeline. The design, which we term the discrete fully probabilistic design, is benchmarked numerically on an example that involves controlling an inverted pendulum with actuation constraints, starting from data collected from a physically different pendulum that does not satisfy the system-specific actuation constraints.
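As background, a schematic statement of the general fully probabilistic design principle (not the specific discretized construction developed here): the policy is chosen so that the closed-loop joint distribution is as close as possible, in Kullback-Leibler divergence, to an ideal distribution encoding the example data and the constraints,
\[
\{\pi_k^\star\}_{k=0}^{N-1} \;\in\; \arg\min_{\{\pi_k(u_k \mid x_k)\}} \; \mathrm{KL}\!\left( f^{\pi}(x_{1:N}, u_{0:N-1}) \,\middle\|\, f^{I}(x_{1:N}, u_{0:N-1}) \right),
\]
where $f^{\pi}$ is the joint probability mass function induced by the policy through the system dynamics and $f^{I}$ is the ideal (target) distribution.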
Abstract:We introduce a control-tutored reinforcement learning (CTRL) algorithm. The idea is to enhance tabular learning algorithms by means of a control strategy with limited knowledge of the system model. By tutoring the learning process, the learning time can be substantially reduced. We use the classical problem of stabilizing an inverted pendulum as a benchmark to numerically illustrate the advantages and disadvantages of the approach.
Abstract:We introduce a control-tutored reinforcement learning (CTRL) algorithm. The idea is to enhance tabular learning algorithms so as to improve the exploration of the state space and substantially reduce learning times, by leveraging some limited knowledge of the plant encoded in a tutoring model-based control strategy. We illustrate the benefits and effectiveness of our novel approach on the problem of controlling one or more agents to herd a set of free-roving target agents in the plane and contain them within a goal region.