Abstract: The purpose of this study is to construct a contact point estimation system for both sides of a finger and to realize a motion in which the finger is bent after being inserted into a tool (hereinafter referred to as the bending-after-insertion motion). To obtain contact points over the entire finger, including the joints, we propose fabricating a nerve-inclusion flexible epidermis, which combines a flexible epidermis with a nerve line made of conductive filament, and estimating the contact position from the change in resistance of the nerve line. The nerve-inclusion flexible epidermis was attached to a thin-fingered robotic hand mounted on a dual-arm robot, and tool-use experiments were conducted. The experiments confirmed that the contact information can be used for tool use, demonstrating the effectiveness of the proposed method.
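A minimal sketch of how such a contact estimate could be computed, assuming a calibrated, roughly monotonic mapping from the nerve line's resistance change to a position along the finger; the divider circuit, function names, and constants below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch: read the nerve-line resistance through a voltage
# divider and map its change to a contact position along the finger.
# All constants are hypothetical calibration values.

def nerve_resistance(v_out, v_in=5.0, r_ref=10_000.0):
    """Recover the nerve-line resistance (ohms) from a voltage-divider reading."""
    # v_out = v_in * r_nerve / (r_ref + r_nerve)  ->  solve for r_nerve
    return r_ref * v_out / (v_in - v_out)

def contact_position_mm(r_measured, r_baseline=8_000.0, r_per_mm=150.0):
    """Map the resistance change to a contact position (mm from the fingertip),
    assuming a calibrated, approximately linear resistance-vs-position relation."""
    return (r_measured - r_baseline) / r_per_mm

if __name__ == "__main__":
    r = nerve_resistance(v_out=2.4)
    print(f"contact estimated at {contact_position_mm(r):.1f} mm along the finger")
```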
Abstract: In order to make robots more useful in a variety of environments, they need to be highly portable, so that they can be transported to wherever they are needed, and highly storable, so that they can be put away when not in use. We propose "on-site robotics", in which robots are built from parts procured at the location where they will operate, as a new solution to the problems of portability and storability. In this paper, as a proof of concept for on-site robotics, we describe a method for estimating the kinematic model of a robot by attaching inertial measurement unit (IMU) sensor modules to its rigid links, estimating the relative orientation between modules from angular velocities, and estimating their relative positions from measurements of centrifugal force. At the end of this paper, as an evaluation of this method, we present an experiment in which a robot made of wooden sticks reaches a target position. Even when the combination of links is changed, the robot reaches the target position again immediately after re-estimation, showing that it can operate even after being reassembled. Our implementation is available at https://github.com/hiroya1224/urdf_estimation_with_imus .
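A minimal sketch of the two estimation steps, under the assumption that two IMUs ride on the same rigid link: the relative rotation can be found by aligning their angular-velocity samples (a Kabsch/Wahba-style fit), and the lever arm between them can then be solved by least squares from the rigid-body acceleration relation a_b = R a_a + dw x r + w x (w x r), whose second-order term is the centrifugal contribution mentioned above. Function names and the numerical-differentiation step are assumptions for illustration; see the repository above for the actual implementation.

```python
import numpy as np

def relative_rotation(w_a, w_b):
    """Rotation R with w_b ~= R @ w_a, fitted from two N x 3 arrays of
    angular-velocity samples measured by IMUs on the same rigid link."""
    H = w_a.T @ w_b
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def relative_position(w_b, dw_b, a_a, a_b, R):
    """Least-squares lever arm r (expressed in frame b) from the rigid-body
    relation a_b - R @ a_a = (skew(dw) + skew(w) @ skew(w)) @ r.
    dw_b is the angular acceleration, e.g. from differentiating w_b."""
    A = np.vstack([skew(dw) + skew(w) @ skew(w) for w, dw in zip(w_b, dw_b)])
    y = np.concatenate([ab - R @ aa for aa, ab in zip(a_a, a_b)])
    r, *_ = np.linalg.lstsq(A, y, rcond=None)
    return r
```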
Abstract: It is important for daily-life support robots to detect changes in their environment in order to perform tasks. In anomaly detection for computer vision, probabilistic and deep learning methods have been used to compute distances between images, and these methods measure distance at the pixel level. In contrast, this study aims to detect semantic changes in the daily-life environment by leveraging recent large-scale vision-language models. Using their Visual Question Answering (VQA) capability, we propose a method that detects semantic changes by posing multiple questions about a reference image and a current image and comparing the answers, which are obtained in the form of sentences. Unlike deep learning-based anomaly detection methods, this method requires no training or fine-tuning, is robust to noise, and is sensitive to semantic state changes in the real world. In our experiments, we demonstrated the effectiveness of this method by applying it to a patrol task in a real-life environment using a mobile robot, the Fetch Mobile Manipulator. In the future, it may be possible to explain changes in the daily-life environment through spoken language.
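A conceptual sketch of the VQA-based comparison, assuming a generic vqa_answer(image, question) callable that stands in for the vision-language model; the example questions and the simple answer comparison below are illustrative assumptions, not the exact procedure of the paper:

```python
# Ask the same set of questions about the reference and current images and
# report the questions whose answers differ as semantic changes.

QUESTIONS = [
    "Is the door open or closed?",
    "What objects are on the table?",
    "Is the light turned on?",
]

def detect_semantic_changes(reference_image, current_image, vqa_answer,
                            questions=QUESTIONS):
    """Return the questions whose VQA answers differ between the two images."""
    changes = []
    for question in questions:
        before = vqa_answer(reference_image, question)
        after = vqa_answer(current_image, question)
        if before.strip().lower() != after.strip().lower():
            changes.append({"question": question, "before": before, "after": after})
    return changes  # empty list -> no semantic change detected
```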