Abstract: This paper presents a bimanual haptic display based on collaborative robot arms. We address the limitations of existing robot-arm-based haptic displays by optimizing the setup configuration and implementing inertia/friction compensation techniques. The optimized setup configuration maximizes workspace coverage, dexterity, and haptic feedback capability while ensuring collision safety. Inertia/friction compensation significantly improves transparency and reduces user fatigue, leading to a more seamless interaction. The effectiveness of our system is demonstrated in various applications, including bimanual bilateral teleoperation in both real and simulated environments. This research contributes to the advancement of haptic technology by presenting a practical and effective solution for building high-performance bimanual haptic displays from collaborative robot arms.
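For intuition only, here is a minimal sketch of the kind of feed-forward inertia/friction compensation the abstract refers to, assuming a simple Coulomb-plus-viscous friction model and an estimated joint-space inertia matrix. All names, signals, and model choices are illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

def compensation_torque(q, qdot, qddot, M_hat, f_coulomb, b_viscous):
    """Feed-forward torque that cancels estimated inertia and friction.

    q, qdot, qddot : joint position / velocity / acceleration vectors
    M_hat          : callable returning the estimated inertia matrix at q
    f_coulomb      : per-joint Coulomb friction magnitudes (illustrative model)
    b_viscous      : per-joint viscous friction coefficients
    """
    tau_inertia = M_hat(q) @ qddot                                  # cancel apparent inertia
    tau_friction = f_coulomb * np.sign(qdot) + b_viscous * qdot     # cancel modeled friction
    return tau_inertia + tau_friction
```

Adding such a torque on top of the rendered haptic force makes the arm feel lighter to the user, which is the "transparency" improvement the abstract claims.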
Abstract: We present a lightweight and affordable motion capture method based on two smartwatches and a head-mounted camera. In contrast to existing approaches that use six or more expert-level IMU devices, our approach is far more cost-effective and convenient. Our method can make wearable motion capture accessible to everyone, everywhere, enabling 3D full-body motion capture in diverse environments. As the key idea for overcoming the extreme sparsity and ambiguity of the sensor inputs, we integrate 6D head poses obtained from the head-mounted camera into the motion estimation. To enable capture in expansive indoor and outdoor scenes, we propose an algorithm that tracks and updates floor-level changes to define head poses, coupled with a multi-stage Transformer-based regression module. We also introduce novel strategies that leverage visual cues from the egocentric images to further enhance motion capture quality while reducing ambiguity. We demonstrate the performance of our method in various challenging scenarios, including complex outdoor environments and everyday motions such as object interactions and social interactions among multiple individuals.
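To make the regression step concrete, below is a toy single-stage stand-in for the multi-stage Transformer module described above: per-frame wrist-IMU features are concatenated with the 6D head pose, encoded over a temporal window, and mapped to full-body joint rotations. All dimensions and layer sizes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SparseToFullBody(nn.Module):
    """Toy stand-in for a sparse-input full-body pose regressor.

    Inputs are two wrist-worn IMU feature streams plus a 6D head pose per
    frame; output is a 6D rotation per body joint. Dimensions are illustrative.
    """

    def __init__(self, imu_dim=2 * 12, head_dim=6, d_model=128, n_joints=22):
        super().__init__()
        self.embed = nn.Linear(imu_dim + head_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_joints * 6)  # 6D rotation per joint

    def forward(self, imu, head_pose):
        # imu: (B, T, imu_dim), head_pose: (B, T, 6)
        x = self.embed(torch.cat([imu, head_pose], dim=-1))
        x = self.encoder(x)          # temporal attention over the window
        return self.head(x)          # (B, T, n_joints * 6)
```

The point of the sketch is the input design: with only two wrist IMUs, the 6D head pose supplies the global position and heading cues that disambiguate otherwise underdetermined full-body motion.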
Abstract: Synthesizing interaction-involved human motions has been challenging due to the high complexity of 3D environments and the diversity of possible human behaviors within them. We present LAMA, Locomotion-Action-MAnipulation, to synthesize natural and plausible long-term human movements in complex indoor environments. The key motivation of LAMA is to build a unified framework encompassing a series of motions commonly observable in our daily lives, including locomotion, interactions with 3D scenes, and manipulation of 3D objects. LAMA is based on a reinforcement learning framework coupled with a motion matching algorithm to synthesize locomotion and scene interaction seamlessly under common constraints and collision-avoidance handling. LAMA also exploits a motion editing framework based on manifold learning to cover possible variations in interaction and manipulation motions. We quantitatively and qualitatively demonstrate that LAMA outperforms existing approaches in various challenging scenarios. Project webpage: https://lama-www.github.io/.
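For readers unfamiliar with motion matching, here is a minimal sketch of its nearest-neighbor core: pick the database frame whose features best match the current character state and goal, with an extra cost discouraging jarring transitions. In LAMA this search is coupled with a reinforcement-learned policy, which the sketch does not model; all names here are illustrative assumptions.

```python
import numpy as np

def motion_match(query, database, trans_cost):
    """Return the index of the best-matching motion-database frame.

    query      : feature vector of the current character state + goal
    database   : (N, D) array of per-frame features from the motion dataset
    trans_cost : (N,) extra cost penalizing discontinuous transitions
    """
    dists = np.linalg.norm(database - query, axis=1) + trans_cost
    return int(np.argmin(dists))  # next frame to play back
```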