Abstract: Safe learning of locomotion skills is still an open problem. Indeed, the intrinsically unstable nature of the open-loop dynamics of locomotion systems renders naive learning from scratch prone to catastrophic failures in the real world. In this work, we investigate the use of iterative algorithms to safely learn locomotion skills from model predictive control (MPC). In our framework, we use MPC as an expert and take inspiration from the safe data aggregation (SafeDAGGER) framework to minimize the number of failures during training of the policy. Through a comparison with standard approaches such as behavior cloning and vanilla DAGGER, we show that our approach not only incurs substantially fewer failures during training, but also yields a policy that is more robust to external disturbances.
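To make the SafeDAGGER-style idea described above concrete, the following is a minimal, illustrative sketch of such a training loop: the learned policy acts only when its action stays close to the MPC expert's action, the expert takes over otherwise, and every visited state is labeled with the expert action before aggregation and refitting. All names (`mpc_expert`, `LinearPolicy`, `rollout_safedagger`, the threshold `tau`, the toy dynamics) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def mpc_expert(state):
    # Stand-in for the MPC expert: returns a reference action for the state.
    return -0.5 * state  # toy stabilizing feedback

class LinearPolicy:
    """Toy learned policy, refit by regression on the aggregated expert data."""
    def __init__(self, dim):
        self.K = np.zeros((dim, dim))

    def __call__(self, state):
        return self.K @ state

    def fit(self, states, actions):
        # Least-squares regression of expert actions on states.
        S, A = np.asarray(states), np.asarray(actions)
        X, *_ = np.linalg.lstsq(S, A, rcond=None)
        self.K = X.T

def rollout_safedagger(policy, x0, horizon=50, tau=0.2):
    """Collect one rollout; the MPC expert takes over whenever the learned
    action deviates from the expert action by more than the threshold tau."""
    states, expert_actions = [], []
    x = x0.copy()
    for _ in range(horizon):
        a_expert = mpc_expert(x)
        a_policy = policy(x)
        # Safety rule: execute the policy only if it stays close to the expert.
        a_exec = a_policy if np.linalg.norm(a_policy - a_expert) < tau else a_expert
        states.append(x.copy())
        expert_actions.append(a_expert)  # always label with the expert action
        x = x + 0.1 * a_exec             # toy single-integrator dynamics
    return states, expert_actions

# Outer loop: aggregate expert-labeled data across iterations, then refit.
dim = 2
policy = LinearPolicy(dim)
dataset_s, dataset_a = [], []
for it in range(5):
    s, a = rollout_safedagger(policy, x0=np.ones(dim))
    dataset_s += s
    dataset_a += a
    policy.fit(dataset_s, dataset_a)
```

The key design point this sketch tries to convey is that, unlike behavior cloning or vanilla DAGGER, the expert intervenes during data collection whenever the learner looks unsafe, which is what limits failures during training.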