Abstract: In the quest for electrically-driven soft actuators, the focus has shifted away from liquid-gas phase transition, commonly associated with low strain rates and actuation delays, in favour of electrostatic and other electrothermal actuation methods. This shift has prevented the technology from capitalizing on its unique characteristics, particularly low-voltage operation, controllability, scalability, and ease of integration into robots. Here, we introduce a phase transition electric soft actuator capable of strain rates of over 16%/s and pressurization rates of 100 kPa/s, approximately one order of magnitude higher than previous attempts. Blocked forces exceeding 50 N were achieved at voltages up to 24 V. We propose a method for selecting working fluids that allows for application-specific optimization, together with a nonlinear control approach that reduces both parasitic vibrations and control lag. We demonstrate the integration of this technology in soft robotic systems, including the first quadruped robot powered by liquid-gas phase transition.
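One way to ground the working-fluid selection step: a fluid is only viable if its saturation (vapor) pressure at the achievable operating temperature exceeds the actuation pressure the application demands. The sketch below screens candidate fluids with the standard Antoine equation; the fluids, coefficients, and threshold are illustrative examples, not the paper's actual candidates or criterion.

```python
# Illustrative vapor-pressure screening of working fluids via the Antoine
# equation, log10(P) = A - B / (C + T). Fluids and threshold are examples
# only; the paper's selection method may weigh additional properties.
ANTOINE = {  # coefficients for T in degrees C, P in mmHg (standard tables)
    "ethanol": (8.20417, 1642.89, 230.300),   # valid roughly -57..80 C
    "water":   (8.07131, 1730.63, 233.426),   # valid roughly 1..100 C
}
MMHG_TO_KPA = 0.133322

def vapor_pressure_kpa(fluid: str, temp_c: float) -> float:
    """Saturation pressure of `fluid` at `temp_c`, in kPa absolute."""
    a, b, c = ANTOINE[fluid]
    return (10 ** (a - b / (c + temp_c))) * MMHG_TO_KPA

# Example: which fluids can reach a target gauge pressure at a given
# heater temperature (ambient taken as 101.325 kPa absolute)?
target_gauge_kpa, heater_temp_c = 50.0, 90.0
for fluid in ANTOINE:
    p_gauge = vapor_pressure_kpa(fluid, heater_temp_c) - 101.325
    print(f"{fluid}: {p_gauge:.1f} kPa gauge ->",
          "ok" if p_gauge >= target_gauge_kpa else "insufficient")
```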
Abstract: Bio-inspired soft robots have already shown the ability to handle uncertainty and adapt to unstructured environments. However, their availability is partially restricted by time-consuming, costly and highly supervised design-fabrication processes, often based on resource-intensive iterative workflows. Here, we propose an integrated approach targeting the design and fabrication of pneumatic soft actuators in a single casting step. Molds and sacrificial water-soluble hollow cores are printed using fused filament fabrication (FFF). A heated water circuit accelerates the dissolution of the core material and guarantees its complete removal from the actuator walls, while the actuator's mechanical operability is defined through finite element analysis (FEA). This enables the fabrication of actuators with non-uniform cross-sections under minimal supervision, thereby reducing the number of iterations needed during design and fabrication. Three actuators capable of bending and linear motion were designed, fabricated, integrated and demonstrated in three different bio-inspired soft robots: an earthworm-inspired robot, a four-legged robot, and a robotic gripper. We demonstrate the viability, versatility and effectiveness of the proposed methods, contributing to accelerating the design and fabrication of soft robots. This study represents a step toward making soft robots more accessible at lower cost.
Abstract: Real-time robot actuation is one of the main challenges to overcome in human-robot interaction. Most visual sensors are either too slow or their data are too complex to provide meaningful information and low-latency input to a robotic system. The data output of an event camera is high-frequency and extremely lightweight, at only 8 bytes per event. To evaluate the hypothesis of using event cameras as a data source for a real-time robotic system, the position of a waving hand is extracted from the event data and transmitted to a collaborative robot as a movement command. A total time delay of 110 ms was measured between the original movement and the robot movement, with much of the delay caused by the robot dynamics.
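A minimal sketch of how a hand position can be derived from an event stream: with a static camera, the centroid of events in a short sliding window tracks the dominant moving object. The event field layout (timestamp in microseconds, pixel x/y) and the window length are assumptions for illustration, not the paper's exact pipeline.

```python
# Sliding-window event centroid as a lightweight hand-position estimate.
import numpy as np

def track_centroid(events: np.ndarray, window_us: int = 10_000):
    """Yield (t, cx, cy) centroids over consecutive time windows.

    `events` is an (N, 3) array with columns t, x, y, sorted by timestamp.
    """
    t = events[:, 0]
    start = t[0]
    while start < t[-1]:
        mask = (t >= start) & (t < start + window_us)
        if mask.sum() > 20:               # skip near-empty windows (noise)
            cx, cy = events[mask, 1].mean(), events[mask, 2].mean()
            yield start, cx, cy
        start += window_us

# Example with synthetic events sweeping left to right across the sensor:
rng = np.random.default_rng(0)
ts = np.sort(rng.integers(0, 100_000, 5_000))
xs = 40 + ts / 100_000 * 160 + rng.normal(0, 3, ts.size)
ys = 90 + rng.normal(0, 3, ts.size)
for t0, cx, cy in track_centroid(np.column_stack([ts, xs, ys])):
    pass  # map (cx, cy) to a robot movement command at each step
```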
Abstract: The featured dataset, the Event-based Dataset of Assembly Tasks (EDAT24), showcases a selection of manufacturing primitive tasks (idle, pick, place, and screw), which are basic actions performed by human operators in any manufacturing assembly. The data were captured using a DAVIS240C event camera, an asynchronous vision sensor that registers events whenever the light intensity at a pixel changes. Events are a lightweight data format for conveying visual information and are well suited for real-time detection and analysis of human motion. Each manufacturing primitive has 100 recorded samples of DAVIS240C data, including events and greyscale frames, for a total of 400 samples. In the dataset, the user interacts with objects from the open-source CT-Benchmark in front of the static DAVIS event camera. All data are made available in raw form (.aedat) and in pre-processed form (.npy). Custom-built Python code is made available together with the dataset to help researchers add new manufacturing primitives or extend the dataset with more samples.
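A sketch of loading one pre-processed sample. The file path and array layout (one event per row: timestamp, x, y, polarity) are assumptions for illustration; the dataset's bundled Python code defines the actual format.

```python
# Hypothetical loading of a pre-processed (.npy) EDAT24 sample.
import numpy as np

sample = np.load("pick/sample_001.npy")       # hypothetical file path
timestamps, xs, ys, polarities = sample.T     # assumed column order

print(f"{len(sample)} events over "
      f"{(timestamps[-1] - timestamps[0]) / 1e6:.2f} s")
print(f"ON events: {(polarities > 0).sum()}, "
      f"OFF events: {(polarities <= 0).sum()}")
```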
Abstract: The disposal and recycling of electronic waste (e-waste) is a global challenge. The disassembly of components is a crucial step towards an efficient recycling process that avoids destructive methods. Although most disassembly work is still done manually due to the diversity and complexity of components, there is growing interest in developing automated methods to improve efficiency and reduce labor costs. This study aims to robotize the desoldering and extraction of components from printed circuit boards (PCBs), automating the process as much as possible. The proposed strategy consists of several phases, including the controlled contact of the robotic tool with the PCB components. A dedicated tool was developed to apply a controlled force against each PCB component, removing it from the board. The results demonstrate that it is feasible to remove PCB components with a high success rate (approximately 100% for the larger components).
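A minimal sketch of the controlled-contact idea: approach the component until contact is detected, then regulate the applied force with a simple proportional law. `read_force_n` and `command_tool_velocity` are hypothetical hardware interfaces, not an actual robot API, and the gains and thresholds are illustrative.

```python
# Force-regulated contact: constant approach in free space, P-control once
# the tool touches the PCB component.
import time

TARGET_FORCE_N = 15.0   # illustrative contact force
APPROACH_MMPS = 2.0     # slow approach speed (negative = toward the board)
KP = 0.2                # proportional gain, (mm/s) per N of force error

def controlled_contact(read_force_n, command_tool_velocity, timeout_s=10.0):
    """Approach until contact, then hold TARGET_FORCE_N with a P-controller."""
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout_s:
        force = read_force_n()
        if force < 0.5:                        # free space: keep approaching
            command_tool_velocity(-APPROACH_MMPS)
        else:                                  # in contact: regulate force
            error = TARGET_FORCE_N - force
            command_tool_velocity(-KP * error)
        time.sleep(0.01)                       # 100 Hz control loop
    command_tool_velocity(0.0)
```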
Abstract: An effective human-robot collaborative process reduces the operator's workload, promoting a more efficient, productive, safer and less error-prone working environment. However, the implementation of collaborative robots in industry is still challenging. In this work, we compare manual and robot-assisted assembly processes to evaluate the effectiveness of collaborative robots across different modes of operation (coexistence, cooperation and collaboration). Results indicate an improvement in ergonomic conditions and ease of execution without substantially compromising assembly time. Furthermore, the robot is intuitive to use and guides the user through the proper sequencing of the process.
Abstract: Manufacturing assembly tasks vary in complexity and level of automation. Yet, achieving full automation can be challenging and inefficient, particularly due to the complexity of certain assembly operations. Human-robot collaborative work, leveraging the strengths of human labor alongside the capabilities of robots, can be a solution for enhancing efficiency. This paper introduces the CT benchmark, a benchmark and model set designed to facilitate the testing and evaluation of human-robot collaborative assembly scenarios. It was designed to compare manual and automatic processes using metrics such as assembly time and human workload. The components of the model set can be assembled through the most common assembly tasks, each with varying levels of difficulty. The CT benchmark was designed with a focus on its applicability in human-robot collaborative environments, aiming to ensure the reproducibility and replicability of experiments. Experiments were carried out to assess assembly performance in three different setups (manual, automatic and collaborative), measuring assembly time and the workload on human operators. The results suggest that the collaborative approach takes 70.8% longer than fully manual assembly. However, users reported a lower overall workload, as well as reduced mental demand, physical demand, and effort, according to the NASA-TLX questionnaire.
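For context, NASA-TLX aggregates six 0-100 subscale ratings into one workload score. The sketch below uses the common unweighted ("raw TLX") average; the example ratings are made up and do not reproduce the study's data.

```python
# Raw (unweighted) NASA-TLX workload score from the six subscale ratings.
SUBSCALES = ("mental demand", "physical demand", "temporal demand",
             "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Unweighted mean of the six 0-100 subscale ratings."""
    assert set(ratings) == set(SUBSCALES)
    return sum(ratings.values()) / len(SUBSCALES)

# Made-up ratings, just to show the comparison workflow:
manual = raw_tlx({"mental demand": 55, "physical demand": 60,
                  "temporal demand": 45, "performance": 30,
                  "effort": 65, "frustration": 40})
collaborative = raw_tlx({"mental demand": 35, "physical demand": 30,
                         "temporal demand": 40, "performance": 25,
                         "effort": 35, "frustration": 25})
print(f"manual: {manual:.1f}, collaborative: {collaborative:.1f}")
```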
Abstract: This study evaluates the application of a discrete action space reinforcement learning method (Q-learning) to the continuous control problem of robot inverted pendulum balancing. To speed up the learning process and to overcome the technical difficulties of learning directly on the real robotic system, the learning phase is performed in a simulation environment. A mathematical model of the system dynamics is implemented, identified by curve fitting to data acquired from the real system. The proposed approach proved feasible, culminating in its application on a real-world robot that learned to balance an inverted pendulum. This study also reinforces and demonstrates the importance of an accurate representation of the physical world in simulation for a more efficient implementation of reinforcement learning algorithms in the real world, even when a discrete action space algorithm is used to control a continuous system.
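A minimal sketch of the core technique, discrete-action Q-learning over a discretized continuous state, trained in simulation. The pendulum constants, binning, and reward below are illustrative placeholders, not the identified model from the study.

```python
# Tabular Q-learning on a simulated inverted pendulum (theta = 0 upright).
import numpy as np

G, L, DT = 9.81, 0.5, 0.02                       # gravity, length, time step
N_TH, N_OM, ACTIONS = 31, 31, (-2.0, 0.0, 2.0)   # state bins, torque choices
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1               # learning hyperparameters

def step(theta, omega, torque):
    """One Euler integration step of a simple (unstable) pendulum."""
    omega += (G / L * np.sin(theta) + torque) * DT
    theta += omega * DT
    return theta, omega

def discretize(theta, omega):
    """Map the continuous state onto a (N_TH x N_OM) grid of bins."""
    i = int(np.clip((theta + np.pi) / (2 * np.pi) * N_TH, 0, N_TH - 1))
    j = int(np.clip((omega + 8) / 16 * N_OM, 0, N_OM - 1))
    return i, j

Q = np.zeros((N_TH, N_OM, len(ACTIONS)))
rng = np.random.default_rng(0)
for episode in range(2000):
    theta, omega = rng.uniform(-0.2, 0.2), 0.0   # start near upright
    for _ in range(300):
        s = discretize(theta, omega)
        a = rng.integers(3) if rng.random() < EPS else int(Q[s].argmax())
        theta, omega = step(theta, omega, ACTIONS[a])
        reward = -abs(theta)                     # penalize tilt from upright
        s2 = discretize(theta, omega)
        Q[s][a] += ALPHA * (reward + GAMMA * Q[s2].max() - Q[s][a])
        if abs(theta) > np.pi / 2:               # fell over: end episode
            break
```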
Abstract: Machines that mimic humans have inspired scientists for centuries. Bio-inspired soft robotic hands are a good example of such an endeavor, featuring intrinsic material compliance and continuous motion to deal with uncertainty and adapt to unstructured environments. Recent research has led to impactful achievements in functional designs, modeling, fabrication, and control of soft robots. Nevertheless, fully life-like movement remains challenging to achieve, often relying on trial-and-error from design to fabrication and consuming time and resources. In this study, a soft robotic hand is proposed, composed of soft actuator cores and an exoskeleton, featuring a multi-material design aided by finite element analysis (FEA) to define the hand geometry and promote the fingers' bendability. The actuators are fabricated by molding, and the exoskeleton is 3D-printed in a single step. An ON-OFF controller maintains the fingers' inner pressures at setpoints corresponding to specific bending angles, even in the presence of leaks. The FEA numerical results were validated by experimental tests, as was the hand's ability to grasp objects of different shapes, weights and sizes. This integrated solution will make soft robotic hands more accessible at reduced cost, avoiding time-consuming design-fabrication trial-and-error processes.
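A minimal sketch of the ON-OFF (bang-bang) pressure regulation idea: the inlet valve opens when a finger's pressure falls below the setpoint band (e.g. due to a leak) and closes above it. The sensor/valve functions are hypothetical placeholders, and the band width is illustrative.

```python
# Bang-bang pressure hold with a hysteresis band to avoid valve chatter.
import time

def on_off_pressure_loop(read_pressure_kpa, set_inlet_valve, setpoint_kpa,
                         band_kpa=2.0, period_s=0.01):
    """Keep a finger at `setpoint_kpa` (tied to a target bending angle)."""
    while True:
        p = read_pressure_kpa()
        if p < setpoint_kpa - band_kpa:
            set_inlet_valve(True)      # pressurize: leak or below target
        elif p > setpoint_kpa + band_kpa:
            set_inlet_valve(False)     # hold: above the target band
        time.sleep(period_s)
```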
Abstract: Robots are increasingly present in our lives, sharing the workspace and tasks with human co-workers. However, existing interfaces for human-robot interaction/cooperation (HRI/C) offer limited intuitiveness, and safety is a major concern when humans and robots share the same workspace. Often this stems from the lack of a reliable estimate of the human pose in space, which is the primary input both for calculating the human-robot minimum distance (required for safety and collision avoidance) and for HRI/C based on machine learning algorithms that classify human behaviours/gestures. Each sensor type has its own characteristics, leading to problems such as occlusions (vision) and drift (inertial) when used in isolation. In this paper, we propose a combined system that merges the human tracking provided by a 3D vision sensor with the pose estimation provided by a set of inertial measurement units (IMUs) placed on the human body limbs. The IMUs fill the gaps in occluded areas, ensuring tracking continuity. To mitigate the lingering effects of the IMU offset, we propose a continuous online calculation of the offset value. Experimental tests were designed to simulate human motion in a human-robot collaborative environment in which the robot moves away to avoid unexpected collisions with the human. Results indicate that our approach captures the human's position, for example the forearm, with millimetre-range precision and robustness to occlusions.
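A minimal sketch of the fusion idea as described: while the vision track is valid, the vision-minus-IMU offset is continuously re-estimated online; during occlusions, the output falls back to the IMU pose corrected by the last smoothed offset. The data sources are hypothetical placeholders and the smoothing factor is illustrative.

```python
# Vision + IMU fusion with continuous online offset estimation.
import numpy as np

class VisionImuFusion:
    def __init__(self, smoothing=0.05):
        self.offset = np.zeros(3)    # vision-minus-IMU position offset
        self.smoothing = smoothing   # exponential smoothing factor

    def update(self, imu_pos, vision_pos=None):
        """Return the fused limb position (e.g. the forearm), in metres."""
        if vision_pos is not None:   # vision valid: refresh the offset
            sample = vision_pos - imu_pos
            self.offset = ((1 - self.smoothing) * self.offset
                           + self.smoothing * sample)
            return vision_pos
        return imu_pos + self.offset  # occluded: drift-corrected IMU pose
```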