Abstract: For safe and effective operation of humanoid robots in human-populated environments, the problem of commanding a large number of Degrees of Freedom (DoF) while simultaneously considering dynamic obstacles and human proximity remains unsolved. We present a new reactive motion controller that commands the two arms and three torso joints of a humanoid robot (17 DoF in total). We formulate a quadratic program that seeks joint velocity commands respecting multiple constraints while minimizing the magnitude of the velocities. We introduce a new unified treatment of obstacles that dynamically maps visual and proximity (pre-collision) as well as tactile (post-collision) obstacles as additional constraints to the motion controller, in a distributed fashion over the surface of the upper body of the iCub robot (with 2000 pressure-sensitive receptors). The bio-inspired controller: (i) produces human-like minimum-jerk movement profiles; (ii) gives rise to a robot with whole-body visuo-tactile awareness, resembling peripersonal space representations. The controller was extensively validated experimentally, including in a physical human-robot interaction scenario.
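A minimal sketch of the kind of constraint-based velocity QP this abstract describes, written with cvxpy. This is an illustration, not the authors' exact formulation: the Jacobians, the damping weight, and the single obstacle constraint are placeholder assumptions.

```python
# Illustrative velocity-level QP: track a desired end-effector velocity
# while keeping joint velocities small and respecting an obstacle
# constraint. Requires: pip install numpy cvxpy
import numpy as np
import cvxpy as cp

n = 17                                  # joints (two arms + torso)
J = np.random.randn(3, n)               # task Jacobian (placeholder values)
x_dot_des = np.array([0.05, 0.0, 0.0])  # desired end-effector velocity [m/s]
q_dot_max = np.full(n, 1.0)             # joint velocity limits [rad/s]

# One "obstacle" constraint: limit the approach velocity of a control
# point along the unit direction n_obs towards the obstacle.
J_p = np.random.randn(3, n)             # Jacobian of the control point
n_obs = np.array([1.0, 0.0, 0.0])       # direction towards the obstacle
v_limit = 0.02                          # allowed approach speed [m/s]

q_dot = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(J @ q_dot - x_dot_des)
                        + 1e-3 * cp.sum_squares(q_dot))  # velocity damping
constraints = [
    cp.abs(q_dot) <= q_dot_max,          # joint velocity limits
    n_obs @ J_p @ q_dot <= v_limit,      # obstacle mapped to a constraint
]
cp.Problem(objective, constraints).solve()
print(q_dot.value)
```

New pre- or post-collision stimuli simply append further linear inequality constraints of the same form, which is what makes a unified obstacle treatment attractive in this setting.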
Abstract: Two regimes permitting safe physical human-robot interaction, speed and separation monitoring and safety-rated monitored stop, depend on reliable perception of the space surrounding the robot. This can be accomplished by visual sensors (such as cameras, RGB-D cameras, or LIDARs), proximity sensors, or dedicated devices used in industrial settings, such as pads activated by the presence of the operator. The deployment of a particular solution is often ad hoc, and no unified representation of the interaction space or of its coverage by the different sensors exists. In this work, we take first steps in this direction by defining the spaces to be monitored, representing all sensor data as information about occupancy, and using occupancy-based metrics to calculate how well a particular sensor covers the workspace. We demonstrate our approach in two (multi-)sensor-placement experiments in three static scenes and one experiment in a dynamic scene. The occupancy representation allows the effectiveness of various sensor setups to be compared. The approach can therefore serve as a prototyping tool to establish the sensor setup that provides the most efficient coverage for the given metrics and sensor representations.
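A sketch of one occupancy-based coverage metric of the kind the abstract mentions: the workspace is voxelized, each sensor contributes a boolean "observed" mask, and coverage is the fraction of monitored voxels seen by at least one sensor. The spherical field-of-view model and all numbers are assumptions for clarity.

```python
# Occupancy-grid coverage metric (illustrative). Occlusions and real
# sensor frustums are ignored for brevity.
import numpy as np

res = 0.05                                   # voxel edge length [m]
xs, ys, zs = np.mgrid[-1:1:res, -1:1:res, 0:1.5:res]
workspace = np.ones(xs.shape, dtype=bool)    # voxels to be monitored

def sensor_mask(origin, max_range):
    """Voxels within range of a sensor placed at `origin`."""
    d = np.sqrt((xs - origin[0])**2 + (ys - origin[1])**2
                + (zs - origin[2])**2)
    return d <= max_range

covered = sensor_mask((0.0, 0.0, 1.0), 1.2) | sensor_mask((0.8, 0.8, 0.5), 1.0)
coverage = np.count_nonzero(covered & workspace) / np.count_nonzero(workspace)
print(f"coverage: {coverage:.1%}")
```

Because every sensor, whatever its modality, reduces to a mask over the same grid, candidate placements can be scored and compared with the identical metric.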
Abstract: We study the performance of state-of-the-art human keypoint detectors in the context of close-proximity human-robot interaction. Detection in this scenario is specific in that only a subset of body parts, such as the hands and torso, is in the field of view. In particular, (i) we survey existing datasets with human pose annotation from the perspective of close-proximity images and prepare and make publicly available a new Human in Close Proximity (HiCP) dataset; (ii) we quantitatively and qualitatively compare state-of-the-art human whole-body 2D keypoint detection methods (OpenPose, MMPose, AlphaPose, Detectron2) on this dataset; (iii) since accurate detection of hands and fingers is critical in applications with handovers, we evaluate the performance of the MediaPipe hand detector; (iv) we deploy the algorithms on a humanoid robot with an RGB-D camera on its head and evaluate their performance in 3D human keypoint detection, using a motion capture system as reference. The best-performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both had difficulty with finger detection. We therefore propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework that provides the most accurate and robust detection. We also analyse the failure modes of individual detectors -- for example, to what extent the absence of the person's head in the image degrades performance. Finally, we demonstrate the framework in a scenario where a humanoid robot interacting with a person uses the detected 3D keypoints for whole-body avoidance maneuvers.
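A sketch of the proposed fusion: a whole-body detector (MMPose or AlphaPose, stubbed out below because their APIs are model-specific) provides the body keypoints, while MediaPipe supplies the hand keypoints that the body detectors struggle with. The detect_body() stub and the dictionary format are assumptions for illustration only.

```python
# Body keypoints from a whole-body detector, hands from MediaPipe,
# merged into one keypoint dictionary. Requires: pip install opencv-python mediapipe
import cv2
import mediapipe as mp

def detect_body(image_bgr):
    """Placeholder for an MMPose/AlphaPose call returning {name: (x, y)}."""
    return {"nose": (320.0, 90.0), "right_wrist": (410.0, 300.0)}  # dummy

def detect_hands(image_bgr):
    """MediaPipe hand landmarks, converted to pixel coordinates."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.hands.Hands(static_image_mode=True,
                                  max_num_hands=2) as hands:
        res = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    keypoints = {}
    for i, hand in enumerate(res.multi_hand_landmarks or []):
        for j, lm in enumerate(hand.landmark):
            keypoints[f"hand{i}_lm{j}"] = (lm.x * w, lm.y * h)
    return keypoints

def fused_keypoints(image_bgr):
    """Single framework: body detector for the body, MediaPipe for hands."""
    kp = detect_body(image_bgr)
    kp.update(detect_hands(image_bgr))
    return kp
```

With an RGB-D camera, each 2D keypoint can then be back-projected through the registered depth image to obtain the 3D points used for avoidance.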
Abstract: Soft electronic skins are one means of turning an industrial manipulator into a collaborative robot. For manipulators that are already fit for physical human-robot collaboration, soft skins can make them even safer. In this work, we study the post-impact behavior of two collaborative manipulators (UR10e and KUKA LBR iiwa) and one classical industrial manipulator (KUKA Cybertech), in the presence or absence of an industrial protective skin (AIRSKIN). In addition, we isolate the effects of the passive padding and the active contribution of the sensor to the robot's reaction. We present a total of 2250 collision measurements and study the impact force, contact duration, clamping force, and impulse; the dataset is publicly available. Our results can be summarized as follows. For transient collisions, the passive skin properties lowered the impact forces by about 40 %. During quasi-static contact, the effect of the skin covers -- active or passive -- cannot be isolated from the collision detection and reaction of the collaborative robots. The stop categories triggered by the active protective skin were found to have important effects. We systematically compare the different settings and the empirically established safe velocities with the prescriptions of ISO/TS 15066. In some cases, up to four times the velocity prescribed by ISO/TS 15066 complies with the impact force limits and can thus be considered safe. We propose an extension of the formulas relating impact force and permissible velocity that takes into account the stiffness and compressible thickness of the protective cover, leading to better predictions of the collision forces. At the same time, this work emphasizes the need for in situ measurements, as all the factors we studied -- presence of an active/passive skin, safety stop settings, robot collision reaction, impact direction, and, of course, velocity -- affect the force evolution after impact.
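A hedged sketch of the spring-based contact model underlying ISO/TS 15066 (peak force F = v * sqrt(mu * k), with mu the reduced mass and k the contact stiffness), extended here by modeling the protective cover as a second spring in series. The exact extension proposed in the paper may differ, and all numbers are illustrative, not normative limits.

```python
# Permissible impact velocity under a simple two-spring contact model
# (illustrative; the cover's finite compressible thickness, which caps
# its deflection, is not modeled here).
import math

def permissible_velocity(f_max, m_robot, m_body, k_body, k_cover=None):
    """Max impact velocity [m/s] keeping the peak force below f_max [N]."""
    mu = 1.0 / (1.0 / m_robot + 1.0 / m_body)   # reduced (effective) mass
    k = k_body if k_cover is None else (k_body * k_cover) / (k_body + k_cover)
    return f_max / math.sqrt(mu * k)

# Example with illustrative values: bare robot vs. robot with a soft cover.
print(permissible_velocity(f_max=280, m_robot=20, m_body=40, k_body=25e3))
print(permissible_velocity(f_max=280, m_robot=20, m_body=40, k_body=25e3,
                           k_cover=10e3))
```

The series-spring term lowers the effective stiffness, which is why a compliant cover permits a higher velocity for the same force limit, consistent with the trend the measurements show.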
Abstract: We present a robot kinematic calibration method that combines complementary calibration approaches: self-contact, planar constraints, and self-observation. We analyze the estimation of the end-effector parameters and the joint offsets of the manipulators, calibrate the complete kinematic chain (DH parameters), and compare our results with ground-truth measurements provided by a laser tracker. Our main findings are: (1) When applying the calibration approaches in isolation, the self-contact approach yields the best and most stable results. (2) All combinations of more than one approach were superior to any single approach, both in terms of calibration errors and of the observability of the estimated parameters; combining more approaches delivers robot parameters that generalize better to the parts of the workspace not used for calibration. (3) Sequential calibration, i.e., calibrating the cameras first and then the robot kinematics, is more effective than simultaneous calibration of all parameters. In real experiments, we employ two industrial manipulators mounted on a common base. The manipulators are equipped with force/torque sensors at their wrists, two cameras attached to the robot base, and special end effectors with fiducial markers. We collect a new comprehensive dataset for robot kinematic calibration and make it publicly available. The dataset and its analysis provide quantitative and qualitative insights that go beyond the specific manipulators used in this work and apply to self-contained robot kinematic calibration in general.
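A minimal sketch of how the three complementary residual types can be stacked into one nonlinear least-squares problem. The toy fk() function and the data containers are placeholders standing in for the full DH chain, not the authors' implementation.

```python
# Stacking self-contact, planar-constraint, and self-observation residuals
# into a single calibration problem (illustrative).
import numpy as np
from scipy.optimize import least_squares

def fk(theta, q):
    """Toy 2-link planar forward kinematics standing in for the full DH
    chain; theta holds the link lengths being calibrated, q the joints."""
    l1, l2 = theta
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y, 0.0])

def residuals(theta, contact_pairs, plane_cfgs, n, d, cam_obs):
    r = []
    for qa, qb in contact_pairs:        # self-contact: same point on both chains
        r.extend(fk(theta, qa) - fk(theta, qb))
    for q in plane_cfgs:                # planar constraint: n . p + d = 0
        r.append(n @ fk(theta, q) + d)
    for q, p_cam in cam_obs:            # self-observation: camera vs. model
        r.extend(fk(theta, q) - p_cam)
    return np.asarray(r)

# Usage (with real measurement data):
# theta0 = np.array([0.28, 0.27])       # nominal, uncalibrated parameters
# sol = least_squares(residuals, theta0,
#                     args=(contact_pairs, plane_cfgs, n, d, cam_obs))
```

Adding a residual family corresponds to adding one of the complementary approaches; their combination constrains parameters that any single family leaves poorly observable.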
Abstract: Collaborative robots, i.e., robots designed for direct interaction with a human, represent a promising step in robotic manufacturing. However, their performance is limited by the safety requirements of standards. In this article, we measure the forces exerted by two robot arms (UR10e and KUKA LBR iiwa) on an impact measuring device, at different positions in the robot workspace and with various velocities. Based on these measurements, we investigate the Power and Force Limiting regime presented in ISO/TS 15066. Impact forces are in practice hard to calculate analytically, as many properties of the robots are not available (e.g., proprietary control algorithms); this motivates the use of simple yet reasonably accurate approximations. Our results show that the height of the impact location is also an important factor and that an accurate model of the robot can be created from a limited number of impact samples. Previous work predicted impact forces from other factors (distance, velocity, weight), but these predictions are less accurate. Our approach allows fast estimation of the impact forces across the robot's workspace and thus makes it easier to design a safe human-robot collaboration setup.
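An illustrative sketch of fitting a simple impact-force model from a small number of measured collisions, here linear in velocity and impact height. The data and the model form are placeholders, not the paper's measurements or its actual model.

```python
# Least-squares fit of a peak-force model F ~ a + b*v + c*h from a handful
# of (velocity, height, force) samples (dummy values for illustration).
import numpy as np

v = np.array([0.10, 0.25, 0.25, 0.40, 0.40, 0.55])   # velocity [m/s]
h = np.array([0.30, 0.30, 0.80, 0.30, 0.80, 0.80])   # impact height [m]
F = np.array([110., 180., 150., 260., 215., 280.])   # peak force [N]

A = np.column_stack([np.ones_like(v), v, h])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
a, b, c = coef

v_query, h_query = 0.35, 0.50
print(f"predicted peak force: {a + b * v_query + c * h_query:.0f} N")
```

Once fitted, such a model can be evaluated in microseconds anywhere in the workspace, which is what enables fast safety checks when designing a collaborative cell.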