Abstract: We propose a human-centered safety filter (HCSF) for shared autonomy that significantly enhances system safety without compromising human agency. Our HCSF is built on a neural safety value function, which we first learn scalably through black-box interactions and then use at deployment to enforce a novel quality control barrier function (Q-CBF) safety constraint. Since this Q-CBF safety filter requires no knowledge of the system dynamics, either for synthesis or for runtime safety monitoring and intervention, our method applies readily to complex, black-box shared autonomy systems. Notably, our HCSF's CBF-based interventions modify the human's actions minimally and smoothly, avoiding the abrupt, last-moment corrections delivered by many conventional safety filters. We validate our approach in a comprehensive in-person user study using Assetto Corsa, a high-fidelity car racing simulator with black-box dynamics, to assess robustness in "driving on the edge" scenarios. We compare both trajectory data and drivers' perceptions of our HCSF assistance against unassisted driving and a conventional safety filter. Experimental results show that 1) compared to having no assistance, our HCSF improves both safety and user satisfaction without compromising human agency or comfort, and 2) relative to a conventional safety filter, our proposed HCSF boosts human agency, comfort, and satisfaction while maintaining robustness.
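The Q-CBF intervention described above can be pictured as a minimal-deviation correction of the human's action, driven entirely by the learned safety value rather than by a dynamics model. The Python sketch below illustrates one plausible discrete-time form of such a filter; the safety_value and sample_actions interfaces, the decay coefficient gamma, and the specific condition V(s, a) >= (1 - gamma) * V(s, a_h) are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def q_cbf_filter(state, human_action, safety_value, sample_actions, gamma=0.1):
    """Return the action closest to the human's that satisfies a CBF-style
    condition on a learned safety value (illustrative sketch only).

    safety_value(state, action) -> scalar, larger is safer (hypothetical API).
    sample_actions(human_action) -> candidate actions near the human's input.
    """
    v_h = safety_value(state, human_action)
    # Keep the human's action whenever the learned value already certifies it.
    if v_h >= 0.0:
        return human_action
    # Otherwise pick the nearest candidate satisfying the decay condition
    # V(s, a) >= (1 - gamma) * V(s, a_h), a discrete-time CBF-like constraint.
    threshold = (1.0 - gamma) * v_h
    best, best_dist = human_action, np.inf
    for a in sample_actions(human_action):
        if safety_value(state, a) >= threshold:
            d = np.linalg.norm(np.asarray(a) - np.asarray(human_action))
            if d < best_dist:
                best, best_dist = a, d
    return best

Because the constraint is evaluated only through the learned value function, a filter of this shape needs no dynamics model, which is what allows it to wrap around a black-box simulator such as Assetto Corsa.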
Abstract: We present a decentralized control algorithm that enables a minimalist robotic swarm, which lacks memory, explicit communication, and relative position information, to encapsulate multiple diffusive target sources in a bounded environment. State-of-the-art approaches generally require either local communication or relative localization to provide guarantees of convergence and safety. We quantify the trade-offs between task, control, and robot parameters that guarantee safe convergence to all sources. Furthermore, our algorithm is robust to occlusions and noise in the sensor measurements, as we demonstrate in simulation.
Abstract: We present a decentralized control algorithm for a robotic swarm tasked with encapsulating static and moving targets in a bounded, unknown environment. We consider minimalist robots without memory, explicit communication, or localization information. State-of-the-art approaches generally assume that the robots in the swarm can detect the relative positions of neighboring robots and targets in order to provide convergence guarantees. In this work, we propose a novel control law for the guaranteed encapsulation of static and moving targets, while avoiding all collisions, when the robots do not know the exact relative location of any robot or target in the environment. We use Lyapunov stability theory to prove the convergence of our control algorithm and to derive bounds on the ratio between target and robot speeds. Furthermore, for scenarios where a target moves faster than a robot, our approach provides stochastic guarantees under the bounds we derive on the task parameters. Finally, we analyze how the emergent behavior changes with different task parameters and with noisy sensor readings.
Abstract: We propose a decentralized control algorithm for a minimalist robotic swarm with limited capabilities, from which the desired global behavior emerges. We consider the problem of searching for and encapsulating various targets in the environment while avoiding collisions with both static and dynamic obstacles. The novelty of this work is the guaranteed generation of the desired complex swarm behavior by constrained individual robots that have no memory, no localization, and no knowledge of the exact relative locations of their neighbors. Moreover, we analyze how the emergent behavior changes with different task parameters, noise in the sensor readings, and asynchronous execution.
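The three swarm abstracts above share the same minimalist sensing model: each robot acts only on coarse, instantaneous local readings, with no memory, communication, or relative positioning. The Python sketch below shows what a single memoryless, communication-free control step could look like under those constraints; the binary inputs, the speed and turn limits, and the reactive rules themselves are illustrative assumptions, not the control law proposed or analyzed in these papers.

import numpy as np

def minimalist_step(target_seen, neighbor_close, rng, v_max=1.0, turn_max=np.pi / 4):
    """One memoryless control step for a single swarm robot (illustrative only).

    target_seen    -- True if a target is detected anywhere in the field of view.
    neighbor_close -- True if another robot or obstacle trips the proximity sensor.
    Returns (forward_speed, turn_rate).
    """
    if neighbor_close:
        # Collision avoidance first: stop and turn away from whatever is near.
        return 0.0, turn_max
    if target_seen:
        # Move toward the sensed region; encapsulation can emerge as robots
        # accumulate around a target and block each other's approach.
        return v_max, 0.0
    # Nothing sensed: a random heading change keeps the robot exploring.
    return v_max, rng.uniform(-turn_max, turn_max)

# Example: a robot that senses nothing keeps moving and picks a random turn.
rng = np.random.default_rng(0)
speed, turn = minimalist_step(target_seen=False, neighbor_close=False, rng=rng)

A purely reactive rule of this form is what makes the convergence and collision-avoidance guarantees described above nontrivial: with no memory or relative positions, the analysis must reason about the closed-loop ensemble rather than about any individual robot's estimate.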