Abstract: Skill transfer from humans to robots is challenging. At present, many researchers focus on capturing only position or joint-angle data from humans to teach robots. Although this approach has yielded impressive results for grasping applications, reconstructing motion for object handling or fine manipulation from a human hand to a robot hand remains sparsely explored. Humans use tactile feedback to adjust their motion to various objects, but capturing and reproducing the applied forces is an open research question. In this paper we introduce a wearable fingertip tactile sensor that captures distributed 3-axis force vectors on the fingertip. The sensor is interchangeable between the human hand and the robot hand: it can also be assembled to fit a robot hand such as the Allegro hand. The paper presents the structural aspects of the sensor as well as the methodology used to design, manufacture, and calibrate it. The sensor measures forces accurately, with a mean absolute error of 0.21, 0.16, and 0.44 newtons in the X, Y, and Z directions, respectively.
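For reference, the per-axis accuracy figure reported above corresponds to a mean absolute error computed between the sensor output and a ground-truth force measurement. The following is a minimal sketch of that metric, not the authors' calibration pipeline; the file names and data layout are placeholder assumptions.

```python
import numpy as np

def per_axis_mae(predicted, reference):
    """Mean absolute error between predicted and reference force vectors.

    predicted, reference: arrays of shape (N, 3) holding X, Y, Z forces in newtons.
    Returns a length-3 array of MAE values, one per axis.
    """
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.mean(np.abs(predicted - reference), axis=0)

# Hypothetical usage against a reference force/torque sensor log:
# sensor_forces = np.load("sensor_forces.npy")       # placeholder file name
# reference_forces = np.load("reference_forces.npy")  # placeholder file name
# mae_x, mae_y, mae_z = per_axis_mae(sensor_forces, reference_forces)
```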
Abstract: Multi-fingered hands could achieve many dexterous manipulation tasks, similar to humans, and tactile sensing could enhance manipulation stability for a variety of objects. However, tactile sensors on multi-fingered hands come in a variety of sizes and shapes. Convolutional neural networks (CNNs) can be useful for processing tactile information, but because CNNs require a rectangular input, the information from multi-fingered hands needs arbitrary pre-processing, which may lead to unstable results. How to process such irregularly arranged tactile information and use it to achieve manipulation skills therefore remains an open issue. This paper presents a control method based on a graph convolutional network (GCN) that extracts geodesical features from tactile data with complicated sensor alignments. Moreover, object-property labels are provided to the GCN to adjust in-hand manipulation motions. Distributed tri-axial tactile sensors are mounted on the fingertips, finger phalanges, and palm of an Allegro hand, resulting in 1152 tactile measurements. Training data is collected with a data glove to transfer human dexterous manipulation directly to the robot hand. The GCN achieved high success rates for in-hand manipulation. We also confirmed that fragile objects were deformed less when correct object labels were provided to the GCN. Visualizing the activations of the GCN with PCA verified that the network acquired geodesical features. Our method achieved stable manipulation even when an experimenter pulled a grasped object and for untrained objects.
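To make the idea of graph convolution over irregularly placed taxels concrete, the sketch below shows a plain-PyTorch GCN that maps tri-axial tactile readings plus an object-property label to a joint command. It is an illustrative assumption, not the paper's network: the taxel count (e.g., 384 taxels x 3 axes = 1152 measurements), layer sizes, label dimension, and adjacency construction are placeholders.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        # h: (num_taxels, in_dim) node features
        # a_hat: (num_taxels, num_taxels) normalized adjacency with self-loops,
        #        built from physical proximity of taxels on the hand surface
        return torch.relu(a_hat @ self.lin(h))

class TactileGCN(nn.Module):
    """Maps distributed tri-axial tactile readings plus an object-property label
    to a joint-command vector (all dimensions are illustrative)."""
    def __init__(self, label_dim=4, joint_dim=16):
        super().__init__()
        self.gc1 = GraphConv(3, 32)   # each taxel carries a 3-axis force
        self.gc2 = GraphConv(32, 32)
        self.head = nn.Sequential(
            nn.Linear(32 + label_dim, 64), nn.ReLU(),
            nn.Linear(64, joint_dim),
        )

    def forward(self, taxels, a_hat, obj_label):
        h = self.gc2(self.gc1(taxels, a_hat), a_hat)
        pooled = h.mean(dim=0)        # graph-level readout over all taxels
        return self.head(torch.cat([pooled, obj_label], dim=-1))
```

Because the adjacency matrix encodes which taxels are physical neighbours on the hand, message passing follows the sensor surface rather than an arbitrary rectangular grid, which is the property the abstract refers to as geodesical feature extraction.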
Abstract: Selecting appropriate tools and using them to perform daily tasks is a critical capability for introducing robots into domestic applications. In previous studies, however, adaptability to target objects was limited, making it difficult to change tools and adjust actions accordingly. To manipulate various objects with tools, robots must both understand tool functions and recognize object characteristics in order to discern the tool-object-action relation. We focus on active perception using multimodal sensorimotor data while a robot interacts with objects, allowing the robot to recognize their extrinsic and intrinsic characteristics. We construct a deep neural network (DNN) model that learns to recognize object characteristics, acquires tool-object-action relations, and generates motions for tool selection and handling. As an example tool-use situation, the robot performs an ingredient-transfer task, using a turner or a ladle to transfer an ingredient from a pot to a bowl. The results confirm that the robot recognizes object characteristics and serving amounts even when the target ingredients are unknown. We also examine the contributions of image, force, and tactile data and show that learning a variety of multimodal information yields rich perception for tool use.
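As an illustration of how image, force, and tactile streams can be fused for motion generation, the sketch below combines per-modality encoders with a recurrent layer that predicts the next joint command. This is a minimal assumed architecture, not the paper's model; the input resolutions, feature dimensions, and modality sizes are placeholders.

```python
import torch
import torch.nn as nn

class MultimodalMotionNet(nn.Module):
    """Fuses image, force, and tactile features and predicts the next motor command."""
    def __init__(self, force_dim=6, tactile_dim=64, joint_dim=8, hidden=128):
        super().__init__()
        self.img_enc = nn.Sequential(            # small CNN over 64x64 RGB frames
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.force_enc = nn.Linear(force_dim, 16)
        self.tactile_enc = nn.Linear(tactile_dim, 16)
        self.rnn = nn.LSTM(32 + 16 + 16 + joint_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, joint_dim)

    def forward(self, images, forces, tactiles, joints):
        # images: (B, T, 3, 64, 64); forces: (B, T, force_dim)
        # tactiles: (B, T, tactile_dim); joints: (B, T, joint_dim)
        B, T = images.shape[:2]
        img_f = self.img_enc(images.flatten(0, 1)).view(B, T, -1)
        feats = torch.cat([img_f,
                           torch.relu(self.force_enc(forces)),
                           torch.relu(self.tactile_enc(tactiles)),
                           joints], dim=-1)
        h, _ = self.rnn(feats)
        return self.out(h)                       # predicted next joint command per step
```

Dropping one of the encoder branches (e.g., zeroing the tactile input) in such a model is one simple way to probe how much each modality contributes to the learned behaviour, which is the kind of ablation the abstract describes.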