Abstract: This position paper explores the integration of Artificial Intelligence (AI) into force-controlled robotic tasks within the scope of advanced manufacturing, a cornerstone of Industry 4.0. AI's role in enhancing robotic manipulators - key drivers in the Fourth Industrial Revolution - is rapidly leading to significant innovations in smart manufacturing. The objective of this article is to frame these innovations in practical force-controlled applications - e.g. deburring, polishing, and assembly tasks like peg-in-hole (PiH) - highlighting their necessity for maintaining high-quality production standards. This article reviews recent AI-based methodologies, contrasts them, and identifies current challenges to be addressed in future research. The analysis concludes with a perspective on future research directions, emphasizing the need for common performance metrics to validate AI techniques, the integration of various enhancements for performance optimization, and the importance of validating these techniques in relevant scenarios. These future directions aim to provide consistency with already adopted approaches, so as to be compatible with manufacturing standards, increasing the relevance of AI-driven methods in both academic and industrial contexts.
Abstract: As human space exploration evolves toward longer voyages farther from our home planet, in-situ resource utilization (ISRU) becomes increasingly important. Haptic teleoperation is one of the technologies by which such activities can be carried out remotely by humans, whose expertise is still necessary for complex activities. To perform precision tasks effectively, the operator must experience ease of use and accuracy. The same features are required to reduce the complexity of the training procedures and the associated learning time for operators without a specific background in robotic teleoperations. Haptic teleoperation systems, which allow for a natural feeling of forces, need to cope with the trade-off between accurate movements and workspace extension. Both are required for typical ISRU tasks. In this work, we develop a new concept of operations and suitable human-robot interfaces to achieve sample collection and assembly with ease of use and accuracy. In the proposed operational concept, the teleoperation space is extended by executing automated trajectories, planned offline at the control station. In three different experimental scenarios, we validate the end-to-end system involving the control station and the robotic asset, by assessing the contribution of haptics to mission success, the system's robustness to consistent delays, and the ease of training new operators.
Abstract: In recent years, deep reinforcement learning (RL) has shown its effectiveness in solving complex continuous control tasks like locomotion and dexterous manipulation. However, this comes at the cost of an enormous amount of experience required for training, exacerbated by the sensitivity of learning efficiency and policy performance to hyperparameter selection, which often requires numerous trials of time-consuming experiments. This work introduces a Population-Based Reinforcement Learning (PBRL) approach that exploits a GPU-accelerated physics simulator to enhance the exploration capabilities of RL by concurrently training multiple policies in parallel. The PBRL framework is applied to three state-of-the-art RL algorithms -- PPO, SAC, and DDPG -- dynamically adjusting hyperparameters based on the performance of learning agents. The experiments are performed on four challenging tasks in Isaac Gym -- Anymal Terrain, Shadow Hand, Humanoid, Franka Nut Pick -- by analyzing the effect of population size and mutation mechanisms for hyperparameters. The results show that PBRL agents achieve superior performance, in terms of cumulative reward, compared to non-evolutionary baseline agents. The trained agents are finally deployed in the real world for a Franka Nut Pick task, demonstrating successful sim-to-real transfer. Code and videos of the learned policies are available on our project website.
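The exploit/explore mechanism at the core of population-based training can be sketched in a few lines: underperforming agents copy the hyperparameters of top performers and then perturb them. This is a minimal illustrative sketch, not the article's implementation; the agent structure, the 25% exploit fraction, and the 1.2x perturbation factor are assumptions chosen for clarity.

```python
import random

def pbrl_step(population, mutate_scale=1.2, exploit_frac=0.25):
    """One exploit/explore update over a population of agents.

    Each agent is a dict with 'score' (recent cumulative reward) and
    'hparams' (e.g. learning rate). The bottom performers copy the
    hyperparameters of randomly chosen top performers (exploit), then
    multiply each value by mutate_scale or 1/mutate_scale (explore).
    """
    ranked = sorted(population, key=lambda a: a["score"], reverse=True)
    n = max(1, int(len(ranked) * exploit_frac))
    top, bottom = ranked[:n], ranked[-n:]
    for agent in bottom:
        parent = random.choice(top)
        agent["hparams"] = dict(parent["hparams"])       # exploit
        for k in agent["hparams"]:                       # explore
            factor = mutate_scale if random.random() < 0.5 else 1.0 / mutate_scale
            agent["hparams"][k] *= factor
    return population

# Toy usage: four agents with different learning rates; the worst agent
# inherits and perturbs the learning rate of the best one.
pop = [{"score": s, "hparams": {"lr": lr}}
       for s, lr in [(10.0, 3e-4), (8.0, 1e-3), (2.0, 1e-2), (1.0, 5e-2)]]
pop = pbrl_step(pop)
```

In a full system this step would run periodically during parallel training, with 'score' refreshed from each policy's recent episode returns.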
Abstract: Optimal motion planning along prescribed paths can be solved with several techniques, but most of them do not take into account the wrenches exerted by the end-effector when in contact with the environment. When a dynamic model of the environment is not available, no consolidated methodology exists to consider the effect of the interaction. Regardless of the specific performance index to optimize, this article proposes a strategy to include external wrenches in the optimal planning algorithm, considering the task specifications. This procedure is instantiated for minimum-time trajectories and validated on a real robot performing an interaction task under admittance control. The results show that the inclusion of end-effector wrenches affects the planned trajectory, effectively modifying the manipulator's dynamic capability.
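The effect of the external wrench on the planning problem can be sketched with the standard path-parameterized rigid-body dynamics; the notation below is the common one from time-optimal path tracking and is not taken from the article itself.

```latex
% Along a prescribed path $q = q(s)$, $s \in [0,1]$, the joint dynamics
% reduce to scalar form in the path coordinate:
\tau(s) \;=\; a(s)\,\ddot{s} \;+\; b(s)\,\dot{s}^{2} \;+\; c(s)
         \;+\; J^{\top}\!\big(q(s)\big)\, w_{\mathrm{ext}}(s),
\qquad
\tau_{\min} \;\le\; \tau(s) \;\le\; \tau_{\max},
% where $a(s) = M(q)\,q'(s)$ collects inertial terms, $b(s)$ the
% Coriolis/centrifugal terms, $c(s)$ gravity, $J$ the manipulator
% Jacobian, and $w_{\mathrm{ext}}$ the end-effector contact wrench.
```

The added term $J^{\top} w_{\mathrm{ext}}$ shifts the set of feasible $(\dot{s}^{2}, \ddot{s})$ pairs at each point of the path, which is why accounting for contact wrenches changes the minimum-time trajectory and the manipulator's effective dynamic capability.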