Abstract: Reinforcement Learning (RL) has enabled social robots to generate trajectories without human-designed rules or interventions, which makes it more effective than hard-coded systems at generalizing to complex real-world scenarios. However, social navigation is a safety-critical task that requires robots to avoid collisions with pedestrians, and previous RL-based solutions fall short in safety performance in complex environments. To enhance the safety of RL policies, we propose SoNIC, to the best of our knowledge the first algorithm that integrates adaptive conformal inference (ACI) with constrained reinforcement learning (CRL) to learn safe policies for social navigation. More specifically, our method augments RL observations with ACI-generated nonconformity scores and, by incorporating safety constraints with spatial relaxation, provides explicit guidance for agents to leverage these uncertainty metrics to avoid safety-critical areas. Our method outperforms state-of-the-art baselines by a large margin in terms of both safety and adherence to social norms, and demonstrates much stronger robustness to out-of-distribution scenarios. Our code and video demos are available on our project website: https://sonic-social-nav.github.io/.
Abstract: Magnetic soft robots have attracted growing interest due to their unique advantages in untethered actuation and excellent controllability. However, finding the magnetization patterns or magnetic fields required to achieve the desired functions of these robots is quite challenging in many cases. No unified design framework has been proposed yet, and existing methods rely mainly on manual heuristics, which can hardly accommodate the high complexity of the desired robotic motion. Here, we develop an intelligent method to solve the related inverse-design problems, implemented by introducing a novel simulation platform for magnetic soft robots based on Cosserat rod models and a deep reinforcement learning framework based on TD3. We demonstrate that, in simulation, magnetic soft robots with different magnetization patterns can learn to move without human guidance, and that effective magnetic fields can be autonomously generated and then applied directly to real magnetic soft robots in an open-loop manner.