Abstract: Re-grasp manipulation leverages ergonomic tools to assist humans in accomplishing diverse tasks. In certain scenarios, humans often employ external forces to effortlessly and precisely re-grasp tools like a hammer. Previous controllers for in-grasp sliding motion using passive dynamic actions (e.g., gravity) rely on knowledge of finger-object contact information and require customized designs for individual objects with varied geometry and weight distribution, which limits their adaptability to diverse objects. In this paper, we propose an end-to-end sliding motion controller based on imitation learning (IL) that requires minimal prior knowledge of object mechanics, relying solely on object position information. To expedite training convergence, we utilize a data glove to collect expert data trajectories and train the policy through Generative Adversarial Imitation Learning (GAIL). Simulation results demonstrate the controller's versatility in performing in-hand sliding tasks with objects of varying friction coefficients, geometric shapes, and masses. When migrated to a physical system using visual position estimation, the controller achieved an average success rate of 86%, surpassing the baseline success rates of 35% for Behavior Cloning (BC) and 20% for Proximal Policy Optimization (PPO).
Abstract: Dexterous manipulation through imitation learning has gained significant attention in robotics research. The collection of high-quality expert data is of paramount importance when using imitation learning. Existing approaches for acquiring expert data commonly utilize a data glove to capture hand motion information. However, this method suffers from limitations: the collected information cannot be directly mapped to the robotic hand due to discrepancies in their degrees of freedom or structures, and it fails to accurately capture force feedback between the hand and objects during the demonstration process. To overcome these challenges, this paper presents a novel solution in the form of a wearable dexterous hand, namely the Hand-over-hand Imitation learning wearable RObotic Hand (HIRO Hand), which integrates expert data collection and enables the implementation of dexterous operations. HIRO Hand empowers the operator to utilize their own tactile feedback to determine appropriate forces, positions, and actions, resulting in more accurate imitation of the expert's actions. We develop both non-learning and visual behavior cloning based controllers, enabling HIRO Hand to successfully achieve grasping and in-hand manipulation.