Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
Abstract: We propose a dataset to study the influence of object-specific characteristics on human pick-and-place movements and to compare the quality of the motion kinematics extracted by various sensors. The dataset is also suitable for promoting a broader discussion on general learning problems in the hand-object interaction domain, such as intention recognition or motion generation, with applications in Robotics. It consists of recordings of 15 subjects performing 80 repetitions of a pick-and-place action under various experimental conditions, for a total of 1200 pick-and-place movements. The data were collected with a multimodal setup composed of multiple cameras observing the actions from different perspectives, a motion capture system, and a wrist-worn inertial measurement unit. All the objects manipulated in the experiments are identical in shape, size, and appearance but differ in weight and liquid filling, which influences the carefulness required for their handling.
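As an illustration of how the multimodal streams in such a dataset might be aligned for analysis, the following is a minimal sketch that resamples wrist-IMU acceleration onto a motion-capture timeline by linear interpolation; the sampling rates, array names, and data are placeholders and are not taken from the dataset itself.

```python
import numpy as np

# Hypothetical synchronization step: resample wrist-IMU acceleration onto the
# motion-capture timeline so kinematics from both sensors can be compared.
# Sampling rates and data below are placeholders, not the dataset's values.
mocap_t = np.arange(0.0, 5.0, 1 / 250)   # 250 Hz mocap timestamps (s)
imu_t = np.arange(0.0, 5.0, 1 / 100)     # 100 Hz IMU timestamps (s)
imu_acc = np.random.default_rng(0).normal(size=(imu_t.size, 3))  # fake signal

imu_on_mocap = np.column_stack(
    [np.interp(mocap_t, imu_t, imu_acc[:, k]) for k in range(3)]
)
print(imu_on_mocap.shape)  # (1250, 3): IMU acceleration aligned to mocap frames
```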
Abstract: Dexterous in-hand manipulation is a unique and valuable human skill requiring sophisticated sensorimotor interaction with the environment while respecting stability constraints. Satisfying these constraints with generated motions is essential for a robotic platform to achieve reliable in-hand manipulation skills. Explicitly modelling these constraints can be challenging, but they can be implicitly modelled and learned through experience or human demonstrations. We propose a learning and control approach based on dictionaries of motion primitives generated from human demonstrations. To this end, we define an optimization process that combines motion primitives to generate robot fingertip trajectories for moving an object from an initial to a desired final pose. Our experiments show that the approach allows a robotic hand to handle objects as humans do, adhering to stability constraints without requiring their explicit formalization. In other words, the proposed motion primitive dictionaries learn and implicitly embed the constraints crucial to the in-hand manipulation task.
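To give a concrete flavour of combining a dictionary of motion primitives through optimization, here is a minimal sketch in which a fingertip trajectory is expressed as a weighted sum of primitive profiles and the weights are optimized to reach a desired final position; the dictionary contents, cost terms, and weighting are hypothetical and do not reproduce the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical dictionary: K primitives, each a (T, 3) fingertip offset profile
# that would, in practice, be extracted from human demonstrations.
T, K = 50, 8
rng = np.random.default_rng(0)
dictionary = rng.normal(scale=0.01, size=(K, T, 3))   # placeholder primitives

start = np.zeros(3)                    # current fingertip position
goal = np.array([0.03, 0.01, 0.02])    # desired final position (metres)

def trajectory(weights):
    """Fingertip trajectory as a weighted combination of dictionary primitives."""
    return start + np.tensordot(weights, dictionary, axes=1)

def cost(weights):
    traj = trajectory(weights)
    goal_error = np.sum((traj[-1] - goal) ** 2)        # reach the desired pose
    smoothness = np.sum(np.diff(traj, axis=0) ** 2)    # prefer smooth motion
    return goal_error + 0.1 * smoothness

res = minimize(cost, x0=np.zeros(K), method="L-BFGS-B")
print("final fingertip position:", trajectory(res.x)[-1])
```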
Abstract: This paper presents a comprehensive framework to enhance Human-Robot Collaboration (HRC) in real-world scenarios. It introduces a formalism to model articulated tasks requiring cooperation between two agents through a reduced set of primitives. Our implementation leverages Hierarchical Task Network (HTN) planning and a modular multisensory perception pipeline, which includes vision, human activity recognition, and tactile sensing. To showcase the system's scalability, we present an experimental scenario where two humans alternate in collaborating with a Baxter robot to assemble four pieces of furniture with variable components. This integration highlights promising advancements in HRC, suggesting a scalable approach for complex, cooperative tasks across diverse applications.
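For readers unfamiliar with HTN planning, the toy sketch below shows the basic idea of recursively decomposing a compound task into agent-level primitives; the task names and methods are invented for illustration and are not the formalism or planner used in the paper.

```python
# Toy HTN-style decomposition: a compound assembly task is recursively expanded
# into agent-level primitive actions. Task names and methods are invented.
methods = {
    "assemble_stool": ["attach_leg"] * 4,
    "attach_leg": ["robot_fetch_leg", "human_screw_leg"],
}

def decompose(task):
    if task not in methods:            # primitive action: execute as-is
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("assemble_stool"))
```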
Abstract: Hands are a fundamental tool humans use to interact with the environment and objects. Through hand motions, we can obtain information about the shape and materials of the surfaces we touch, modify our surroundings by interacting with objects, manipulate objects and tools, or communicate with other people by leveraging the power of gestures. For these reasons, sensorized gloves, which can collect information about hand motions and interactions, have been of interest since the 1980s in various fields, such as Human-Machine Interaction (HMI) and the analysis and control of human motions. Over the last 40 years, research in this field has explored different technological approaches and contributed to the popularity of wearable custom and commercial products targeting hand sensorization. Despite a positive research trend, these instruments are not yet widespread outside research environments, and devices aimed at research are often ad hoc solutions with a low chance of being reused. This paper provides a systematic literature review of custom gloves to analyze their main characteristics and critical issues, from the type and number of sensors to the limitations due to device encumbrance. The collection of this information lays the foundation for a standardization process necessary for future breakthroughs in this research field.
Abstract: This work aims to tackle the intent recognition problem in Human-Robot Collaborative assembly scenarios. Specifically, we consider an interactive assembly of a wooden stool where the robot fetches the pieces in the correct order and the human builds the parts following the instruction manual. Intent recognition is limited to idle-state estimation and is needed to ensure better synchronization between the two agents. We carried out a comparison between two distinct solutions involving wearable sensors and eye tracking, integrated into the perception pipeline of a flexible planning architecture based on Hierarchical Task Networks. At runtime, the wearable sensing module exploits the raw measurements from four 9-axis Inertial Measurement Units positioned on the wrists and hands of the user as input for a Long Short-Term Memory network. The eye-tracking module, on the other hand, relies on a Head-Mounted Display and Unreal Engine. We tested the effectiveness of the two approaches with 10 participants, each of whom explored both options in alternating order. We collected explicit metrics about the attractiveness and efficiency of the two techniques through User Experience Questionnaires, as well as implicit criteria regarding the classification time and the overall assembly time. The results of our work show that the two methods reach comparable performance in terms of both effectiveness and user preference. Future developments could aim at joining the two approaches to allow the recognition of more complex activities and to anticipate the user's actions.
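As a rough sketch of what an LSTM classifier over raw measurements from four 9-axis IMUs (36 channels) could look like, the snippet below uses PyTorch; the network size, window length, and labels are illustrative assumptions, not the architecture trained in the study.

```python
import torch
import torch.nn as nn

class IdleStateLSTM(nn.Module):
    """Binary idle/active classifier over windows of raw IMU measurements."""
    def __init__(self, n_channels=36, hidden=64):
        # 4 IMUs x 9 axes (accelerometer, gyroscope, magnetometer) = 36 channels
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # idle vs. active

    def forward(self, x):
        # x: (batch, time, channels) window of raw measurements
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

model = IdleStateLSTM()
window = torch.randn(1, 100, 36)       # one window of 100 samples (fake data)
print(model(window).softmax(dim=-1))   # probabilities for idle / active
```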
Abstract: As collaborative robot (cobot) adoption in many sectors grows, so does the interest in integrating digital twins (DTs) into human-robot collaboration (HRC). Digital twins, virtual representations of physical systems and assets, can revolutionize HRC by enabling real-time simulation, monitoring, and control. In this article, we present a review of the state of the art and our perspective on the future of DTs in HRC. We argue that DTs will be crucial in increasing the efficiency and effectiveness of these systems, presenting compelling evidence and a concise vision of their future in HRC, as well as insights into the possible advantages and challenges associated with their integration.
Abstract: Implicit communication plays such a crucial role during social exchanges that it must be considered to achieve a good experience in human-robot interaction. This work addresses the implicit communication associated with the detection of physical properties, transport, and manipulation of objects. We propose an ecological approach to infer object characteristics from the subtle modulations of natural kinematics occurring during human object manipulation. Similarly, we take inspiration from human strategies to shape robot movements so that they communicate the object properties while pursuing the action goals. In a realistic HRI scenario, participants handed over cups - filled with water or empty - to a robotic manipulator that sorted them. We implemented an online classifier to differentiate careful from not careful human movements, associated with the cups' content. We compared our proposed "expressive" controller, which modulates the movements according to the cup filling, against a neutral motion controller. Results show that human kinematics is adjusted during the task as a function of the cup content, even in the reach-to-grasp motion. Moreover, the carefulness during the handover of full cups can be reliably inferred online, well before action completion. Finally, although questionnaires did not reveal explicit preferences from participants, the expressive robot condition improved task efficiency.
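For illustration only, the sketch below shows one simple way an online careful/not-careful classifier over partial wrist kinematics could be structured, using summary features and logistic regression; the features, classifier choice, and data are assumptions and do not correspond to the classifier implemented in the experiment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kinematic_features(partial_traj, dt=0.01):
    """Summary features from a partial wrist trajectory observed so far."""
    vel = np.gradient(partial_traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    return np.array([speed.max(), speed.mean(), np.abs(acc).max()])

# Placeholder training set: trajectories labelled careful (1) / not careful (0).
rng = np.random.default_rng(1)
X = np.vstack([kinematic_features(rng.normal(scale=s, size=(80, 3)))
               for s in [0.05] * 30 + [0.15] * 30])
y = np.array([1] * 30 + [0] * 30)
clf = LogisticRegression().fit(X, y)

# At runtime, classify from the motion observed so far, before action completion.
partial = rng.normal(scale=0.05, size=(40, 3))
print("careful probability:", clf.predict_proba([kinematic_features(partial)])[0, 1])
```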
Abstract: This article presents an open-source architecture for conveying robots' intentions to human teammates using Mixed Reality and Head-Mounted Displays. The architecture has been developed with a focus on modularity and reusability. Both binaries and source code are available, enabling researchers and companies to adopt the proposed architecture as a standalone solution or to integrate it into more comprehensive implementations. Due to its scalability, the proposed architecture can be easily employed to develop shared Mixed Reality experiences involving multiple robots and human teammates in complex collaborative scenarios.
Abstract: By incorporating ergonomics principles into the task allocation process, human-robot collaboration (HRC) frameworks can favour the prevention of work-related musculoskeletal disorders (WMSDs). In this context, existing offline methodologies do not account for the variability of human actions and states; therefore, planning and dynamically assigning roles in human-robot teams remains an unaddressed challenge. This study aims to create an ergonomic role allocation framework that optimises the HRC while taking into account task features and human state measurements. The presented framework consists of two main modules: the first provides the HRC task model, exploiting AND/OR Graphs (AOGs), which we adapted to solve the allocation problem; the second describes the ergonomic risk assessment during task execution through a risk indicator and updates the AOG-related variables to influence future task allocation. The proposed framework can be combined with any time-varying ergonomic risk indicator that evaluates human cognitive and physical burden. In this work, we tested our framework in an assembly scenario, introducing a risk index named Kinematic Wear. The overall framework has been evaluated in a multi-subject experiment. The task allocation results and the subjective evaluations, measured with questionnaires, show that high-risk actions are correctly recognised and not assigned to humans, reducing fatigue and frustration in collaborative tasks.
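The toy sketch below conveys the intuition of ergonomics-aware role allocation by trading execution time against a time-varying risk term; the flat action list, costs, and risk weight are hypothetical and do not implement the paper's AOG-based model or the Kinematic Wear index.

```python
# Toy ergonomics-aware allocation over a flat action list: each action goes to
# the agent with the lowest cost; the human cost adds a risk term. Costs and
# weight are hypothetical placeholders.
actions = {
    "pick_leg":   {"human_time": 4.0, "robot_time": 6.0, "risk": 0.1},
    "lift_table": {"human_time": 5.0, "robot_time": 7.0, "risk": 0.9},
    "screw_bolt": {"human_time": 3.0, "robot_time": 8.0, "risk": 0.1},
}
RISK_WEIGHT = 10.0   # hypothetical trade-off between time and ergonomic risk

def allocate(actions):
    assignment = {}
    for name, a in actions.items():
        human_cost = a["human_time"] + RISK_WEIGHT * a["risk"]
        assignment[name] = "human" if human_cost < a["robot_time"] else "robot"
    return assignment

print(allocate(actions))   # the high-risk 'lift_table' is assigned to the robot
```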
Abstract: As humans, we have a remarkable capacity for reading the characteristics of objects merely by observing how another person carries them. Indeed, how we perform our actions naturally embeds information on the item's features. Collaborative robots can achieve the same ability by modulating the strategy used to transport objects with their end-effector. A contribution in this sense would promote spontaneous interactions by making an implicit yet effective communication channel available. This work investigates whether humans correctly perceive the implicit information shared by a robotic manipulator through its movements during a dyadic collaboration task. Exploiting a generative approach, we designed robot actions to convey virtual properties of the transported objects, in particular to inform the partner whether any caution is required to handle the carried item. We found that carefulness is correctly interpreted when observed through the robot's movements. In the experiment, we used identical empty plastic cups; nevertheless, participants approached them differently depending on the attitude shown by the robot: humans change how they reach for the object, being more careful whenever the robot does the same. This emerging form of motor contagion is entirely spontaneous and happens even if the task does not require it.
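As a rough illustration of how a "careful" attitude can be conveyed purely through motion modulation, the sketch below stretches a minimum-jerk profile so that the peak velocity drops while the end points stay fixed; the profile and parameters are assumptions and not the generative approach used in the work.

```python
import numpy as np

def min_jerk(start, goal, duration, dt=0.01):
    """Minimum-jerk point-to-point profile, a common basis for human-like motion."""
    t = np.arange(0.0, duration + dt, dt) / duration
    s = 10 * t**3 - 15 * t**4 + 6 * t**5
    return np.asarray(start) + s[:, None] * (np.asarray(goal) - np.asarray(start))

# Hypothetical "careful" modulation: stretch the movement time so the peak
# velocity drops while the start and goal positions stay the same.
start, goal = np.zeros(3), np.array([0.4, 0.2, 0.1])
neutral = min_jerk(start, goal, duration=1.5)
careful = min_jerk(start, goal, duration=3.0)

peak_speed = lambda traj: np.linalg.norm(np.gradient(traj, 0.01, axis=0), axis=1).max()
print(peak_speed(neutral), ">", peak_speed(careful))   # careful motion is slower
```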