Abstract: This paper presents a novel robotic arm system named PAPRAS (Plug-And-Play Robotic Arm System). PAPRAS consists of one or more portable robotic arms, docking mounts, and a software architecture including a control system. The dimensions and configuration of PAPRAS are determined by analyzing target task spaces in the home. The PAPRAS arm is lightweight (less than 6 kg) owing to an optimized 3D-printed structure, yet offers a high payload (3 kg) for a human-arm-sized manipulator. A locking mechanism is embedded in the structure for better portability, and the 3D-printed docking mount can be installed easily. PAPRAS's software architecture is developed on an open-source framework and optimized for low-latency, multiagent-based distributed manipulator control. A process for creating new demonstrations is presented to show PAPRAS's ease of use and efficiency. Simulations and hardware experiments are presented across various demonstrations, including sink-to-dishwasher manipulation, coffee making, mobile manipulation on a quadruped, and a suit-up demo, to validate the hardware and software design.
Abstract: In this paper, we present self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that generates natural motions for humanoid robots from motion capture data or RGB videos. While the method requires paired data consisting of human poses and their corresponding robot configurations, it significantly alleviates the need for time-consuming data collection through novel paired-data generation processes. Our self-supervised learning procedure consists of two steps: automatically generating paired data to bootstrap the motion retargeting, and learning a projection-invariant mapping to handle the different expressivity of humans and humanoid robots. Furthermore, our method guarantees that the generated robot pose is collision-free and satisfies position limits by utilizing nonparametric regression in the shared latent space. We demonstrate that our method can generate expressive robotic motions from both the CMU motion capture database and YouTube videos.