Abstract: Simulating tactile perception can potentially enhance the learning capabilities of robotic systems in manipulation tasks. However, the reality gap of simulators for high-resolution tactile sensors remains large. Models trained on simulated data often fail in zero-shot inference and require fine-tuning with real data. In addition, work on high-resolution sensors commonly focuses on sensors with flat surfaces, while round 3D sensors are essential for dexterous manipulation. In this paper, we propose a bi-directional Generative Adversarial Network (GAN) termed SightGAN. SightGAN builds on CycleGAN while adding two loss components aimed at accurately reconstructing the background and contact patterns, including small contact traces. The proposed SightGAN learns real-to-sim and sim-to-real mappings over difference images and is shown to generate real-like synthetic images while maintaining accurate contact positioning. The generated images can be used to train zero-shot models for newly fabricated sensors. Consequently, the resulting sim-to-real generator can be built on top of a tactile simulator to provide a real-world framework, which can potentially be used, for instance, to train reinforcement learning policies for manipulation tasks. The proposed model is verified in extensive experiments with test data collected from real sensors and is also shown to preserve the force information embedded in the tactile images.
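As an illustration of the loss composition described in the abstract, the NumPy sketch below combines a CycleGAN-style cycle-consistency term with two additional terms, one for background reconstruction and one for contact-pattern reconstruction. The function names, the masking scheme, and the weights are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cycle_consistency_loss(x, x_cycled):
    # Standard CycleGAN L1 cycle-consistency term.
    return np.mean(np.abs(x - x_cycled))

def background_loss(fake, background, contact_mask):
    # Penalize deviation from the sensor's no-contact background
    # outside the contact region (hypothetical formulation).
    return np.mean(((1.0 - contact_mask) * (fake - background)) ** 2)

def contact_loss(fake, target, contact_mask):
    # Emphasize accurate reconstruction inside the contact region,
    # so small contact traces are preserved (hypothetical formulation).
    return np.mean((contact_mask * (fake - target)) ** 2)

def sightgan_style_loss(x, x_cycled, fake, target, background,
                        contact_mask, w_bg=1.0, w_contact=10.0):
    # Weighted sum of the three terms; the weights are illustrative.
    return (cycle_consistency_loss(x, x_cycled)
            + w_bg * background_loss(fake, background, contact_mask)
            + w_contact * contact_loss(fake, target, contact_mask))
```

Weighting the contact term more heavily reflects the abstract's emphasis on recovering small contact traces; the actual trade-off would be tuned on real sensor data.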
Abstract: Teleoperation enables a user to perform tasks from a remote location. Hence, the user can interact with a distant environment through the operation of a robotic system. Teleoperation is often required to perform dangerous tasks (e.g., work in disaster zones or in chemical plants) while keeping the user out of harm's way. Nevertheless, common approaches are often cumbersome and unnatural to use. In this letter, we propose TeleFMG, an approach for teleoperation of a multi-finger robotic hand through natural motions of the user's hand. Using a low-cost wearable Force-Myography (FMG) device, musculoskeletal activity on the user's forearm is mapped to hand poses which, in turn, are mimicked by a robotic hand. The mapping is performed by a data-based model that considers the spatial positions of the sensors on the forearm along with the temporal dependencies of the FMG signals. A set of experiments demonstrates the ability of a teleoperator to control a multi-finger hand through intuitive and natural finger motion. Furthermore, transfer to new users is demonstrated.
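The spatio-temporal mapping described above can be sketched as a sliding-window pipeline: FMG samples from all forearm sensors are grouped into short temporal windows, and each window is mapped to a hand pose. The window length, the linear stand-in for the data-based model, and all names below are illustrative assumptions, not the letter's actual architecture.

```python
import numpy as np

def make_windows(fmg, window):
    # Slice an FMG recording of shape (T time steps, S sensors)
    # into overlapping temporal windows of shape (window, S).
    T, _ = fmg.shape
    return np.stack([fmg[t:t + window] for t in range(T - window + 1)])

def predict_hand_pose(windows, weights):
    # Toy linear stand-in for the data-based model: flatten each
    # spatio-temporal window and map it to finger joint angles.
    flat = windows.reshape(windows.shape[0], -1)
    return flat @ weights
```

For example, a 4-step window over 16 forearm sensors gives inputs of shape (4, 16); a weight matrix of shape (64, n_joints) then yields one predicted pose per window, which the robotic hand would track in real time.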