Abstract: For the best human-robot interaction experience, the robot's navigation policy should take the user's personal preferences into account. In this paper, we present a learning framework complemented by a perception pipeline to train a depth vision-based, personalized navigation controller from user demonstrations. Our refined virtual reality interface enables the demonstration of robot navigation trajectories while the user is in motion, covering dynamic interaction scenarios. In a detailed analysis, we evaluate different configurations of the perception pipeline. As the experiments demonstrate, our new pipeline compresses the perceived depth images into a latent state representation and thus enables efficient reasoning about the robot's dynamic environment during learning. We discuss the robot's navigation performance in various virtual scenes, employing a variational autoencoder in combination with a motion predictor, and demonstrate the first personalized robot navigation controller that relies solely on depth images.
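To make the perception pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: a variational autoencoder compresses a depth image into a latent state, and a recurrent motion predictor propagates that latent state over time before it is passed to the navigation policy. All layer sizes, the latent dimension, and the 64x64 depth resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthVAE(nn.Module):
    """Compresses a single-channel depth image into a low-dimensional latent state."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)

    def forward(self, depth: torch.Tensor):
        h = self.encoder(depth)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

class MotionPredictor(nn.Module):
    """Predicts the next latent state from a short history of latent states,
    giving the controller a notion of how the dynamic environment evolves."""
    def __init__(self, latent_dim: int = 32, hidden_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z_history: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(z_history)   # (batch, time, hidden_dim)
        return self.head(out[:, -1])   # predicted next latent state

# Example: encode a short sequence of depth frames and predict the next latent state,
# which would then be fed to the navigation policy as its compressed observation.
vae, predictor = DepthVAE(), MotionPredictor()
depth_frames = torch.rand(4, 3, 1, 64, 64)  # (batch, time, channel, H, W)
z_seq = torch.stack([vae(depth_frames[:, t])[0] for t in range(3)], dim=1)
z_next = predictor(z_seq)
```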
Abstract: For the most comfortable, human-aware robot navigation, subjective user preferences need to be taken into account. This paper presents a novel reinforcement learning framework to train a personalized navigation controller, along with an intuitive virtual reality demonstration interface. The conducted user study provides evidence that our personalized approach significantly outperforms classical approaches, yielding more comfortable human-robot experiences. We achieve these results using only a few demonstration trajectories from non-expert users, who predominantly appreciated the intuitive demonstration setup. As we show in the experiments, the learned controller generalizes well to states not covered in the demonstration data, while still reflecting user preferences during navigation. Finally, we transfer the navigation controller to a real robot without loss in performance.