Abstract: Robots interacting with humans must be safe, reactive, and able to adapt online to unforeseen environmental and task changes. Achieving these requirements concurrently is challenging, as interactive planners lack formal safety guarantees while safe motion planners lack the flexibility to adapt. To tackle this, we propose a modular control architecture that generates both safe and reactive motion plans for human-robot interaction by integrating temporal logic-based discrete task-level plans with continuous Dynamical System (DS)-based motion plans. We formulate a reactive temporal logic formula that enables users to define task specifications through structured language, and propose a task-level planning algorithm that generates a sequence of desired robot behaviors while adapting to environmental changes. At the motion level, we incorporate control Lyapunov functions and control barrier functions to compute stable and safe continuous motion plans for two types of robot behaviors: (i) complex, possibly periodic motions given by autonomous DS and (ii) time-critical tasks specified by Signal Temporal Logic~(STL). Our methodology is demonstrated on a Franka robot arm performing wiping tasks on a whiteboard and a mannequin, remaining compliant to human interaction and adaptive to environmental changes.
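The STL component can be made concrete through its quantitative (robustness) semantics: a specification such as "eventually reach the goal region while always avoiding an obstacle" maps a trajectory to a real number that is positive exactly when the formula is satisfied. The sketch below is illustrative only; the trajectory, goal, obstacle, and thresholds are hypothetical choices, not the paper's setup or planner.

```python
# Illustrative STL robustness computation (hypothetical example).
# Spec: F (||x - goal|| < d)  AND  G (||x - obs|| > r).
import numpy as np

def robustness(traj, goal, d, obs, r):
    """Quantitative STL semantics over a discrete trajectory.

    F (eventually) -> max over time; G (always) -> min over time;
    conjunction -> min of the two sub-robustness values.
    """
    reach = np.max(d - np.linalg.norm(traj - goal, axis=1))   # F: reach goal
    avoid = np.min(np.linalg.norm(traj - obs, axis=1) - r)    # G: avoid obstacle
    return min(reach, avoid)

# A short hand-made trajectory that detours around the obstacle.
traj = np.array([[2.0, 2.0], [1.0, 2.0], [0.0, 2.0], [0.0, 1.0], [0.0, 0.0]])
rho = robustness(traj, goal=np.array([0.0, 0.0]), d=0.1,
                 obs=np.array([1.0, 0.0]), r=0.5)
# rho > 0 indicates the specification is satisfied
```

A planner can then maximize this robustness value (or constrain it to be positive) to synthesize time-critical motions.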
Abstract: A learning-based modular motion planning pipeline is presented that is compliant, safe, and reactive to perturbations during task execution. A nominal motion plan, defined as a nonlinear autonomous dynamical system (DS), is learned offline from kinesthetic demonstrations using a Neural Ordinary Differential Equation (NODE) model. To ensure both stability and safety during inference, a novel approach is proposed that, at each time step, selects a target point for the robot to follow from a time-varying target trajectory generated by the learned NODE. A correction term to the NODE model is computed online by solving a Quadratic Program that guarantees stability and safety using Control Lyapunov Functions and Control Barrier Functions, respectively. Our approach outperforms baseline DS learning techniques on the LASA handwriting dataset and is validated in real-robot experiments, where it produces stable motions, such as wiping and stirring, while being robust to physical perturbations and safe around humans and obstacles.
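The online correction step both abstracts describe can be sketched as a CLF-CBF Quadratic Program: at each state, find the control closest to the nominal DS output that decreases a Lyapunov function (stability, softened with a slack variable) and keeps a barrier function nonnegative (safety, enforced as a hard constraint). The code below is a minimal 2D sketch under assumed conditions, not the paper's implementation: the nominal field f(x) = -x stands in for the learned NODE, and the goal, obstacle, gains `alpha`/`gamma`, and slack penalty are illustrative.

```python
# CLF-CBF QP safety filter for a single-integrator robot (sketch).
import numpy as np
from scipy.optimize import minimize

goal = np.zeros(2)
obs, r = np.array([1.0, 0.0]), 0.3       # circular obstacle (hypothetical)
alpha, gamma, slack_pen = 1.0, 1.0, 100.0

def nominal(x):
    """Stand-in for the learned NODE vector field."""
    return -(x - goal)

def clf_cbf_qp(x):
    """Minimize ||u - f(x)||^2 + p*d^2 over z = [u1, u2, d]."""
    f = nominal(x)
    V = 0.5 * np.dot(x - goal, x - goal)          # Control Lyapunov Function
    h = np.dot(x - obs, x - obs) - r ** 2         # Control Barrier Function
    cons = [
        # CLF decrease, softened by slack d:  (x-goal).u + alpha*V <= d
        {"type": "ineq",
         "fun": lambda z: z[2] - np.dot(x - goal, z[:2]) - alpha * V},
        # CBF safety (hard):  2(x-obs).u >= -gamma*h
        {"type": "ineq",
         "fun": lambda z: 2.0 * np.dot(x - obs, z[:2]) + gamma * h},
    ]
    obj = lambda z: np.sum((z[:2] - f) ** 2) + slack_pen * z[2] ** 2
    res = minimize(obj, np.append(f, 0.0), constraints=cons, method="SLSQP")
    return res.x[:2]

# Forward-Euler rollout; the start is slightly off the goal-obstacle axis,
# so the filtered flow slides around the obstacle instead of stalling.
x, dt = np.array([2.0, 0.4]), 0.02
traj = [x.copy()]
for _ in range(1000):
    x = x + dt * clf_cbf_qp(x)
    traj.append(x.copy())
traj = np.array(traj)
```

The design choice mirrored here is that stability is relaxed (slack) while safety is not, so when the two objectives conflict near an obstacle the robot remains safe and resumes convergence afterwards.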