Abstract: Large Language Models (LLMs) present a promising frontier in robotic task planning by leveraging extensive human knowledge. Nevertheless, the current literature often overlooks the critical aspects of adaptability and error correction within robotic systems. This work addresses this limitation by enabling robots to modify their motion strategies and select the most suitable task plans based on context. We introduce a novel framework, termed action contextualization, that tailors robot actions to the precise requirements of specific tasks, enhancing adaptability through LLM-derived contextual insights. We further propose motion metrics that evaluate robot performance, guarantee the feasibility and efficiency of adjusted motions, and eliminate planning redundancies. Moreover, our framework supports online feedback between the robot and the LLM, enabling immediate modification of task plans and correction of errors. In extensive validation, our framework achieved an overall success rate of 81.25%. Finally, integrated with dynamical system (DS)-based robot controllers, the robotic arm-hand system autonomously executes LLM-generated motion plans for sequential table-clearing tasks, rectifies errors without human intervention, and completes tasks while remaining robust to external disturbances. The proposed framework can be integrated with modular control approaches, significantly enhancing robots' adaptability and autonomy in sequential task execution.
Abstract: This article proposes a novel methodology for learning a stable robot control law driven by dynamical systems. The methodology requires a single demonstration and can deduce stable dynamics in arbitrarily high dimensions. The method relies on the idea that there exists a latent space in which the nonlinear dynamics appears quasi-linear. By leveraging the properties of graph embeddings, the original nonlinear dynamics is mapped onto a stable linear dynamical system (DS). We show that the eigendecomposition of the graph Laplacian yields embeddings that are linear in two dimensions and quasi-linear in higher dimensions. The nonlinear terms vanish exponentially as the number of datapoints increases, and for a large density of points the embedding appears linear. We show that this new embedding makes it possible to model highly nonlinear dynamics in high dimensions and that it outperforms alternative techniques in both reconstruction precision and the number of parameters required for the embedding. We demonstrate its applicability by controlling a real robot tasked with performing complex free motions in space.
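The graph-Laplacian embedding summarized in the second abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the demonstration trajectory, neighbourhood size `k`, Gaussian kernel width, and use of the unnormalized Laplacian are all illustrative assumptions. The sketch builds a k-nearest-neighbour graph over a nonlinear planar trajectory, eigendecomposes its Laplacian, and checks that the first nontrivial eigenvector is close to a linear parameterization of the curve.

```python
# Illustrative sketch (not the authors' code): Laplacian-eigenmap-style
# embedding of a single nonlinear demonstration trajectory.
import numpy as np

# Nonlinear 2-D demonstration: an arc spiralling toward the origin.
n = 200
t = np.linspace(0.0, 1.0, n)
X = np.stack([(1 - t) * np.cos(2 * t), (1 - t) * np.sin(2 * t)], axis=1)

# k-nearest-neighbour graph with Gaussian edge weights (k and the
# kernel width 0.01 are arbitrary choices for this example).
k = 10
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = np.zeros((n, n))
for i in range(n):
    idx = np.argsort(D[i])[1:k + 1]              # skip the point itself
    W[i, idx] = np.exp(-D[i, idx] ** 2 / 0.01)
W = np.maximum(W, W.T)                           # symmetrize the graph

# Unnormalized graph Laplacian L = Deg - W and its eigendecomposition.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)

# The smallest eigenvalue is ~0 (constant eigenvector on a connected
# graph); the next eigenvector gives the 1-D embedding coordinate.
y = eigvecs[:, 1]

# On a curve-like graph this coordinate varies monotonically along the
# trajectory, i.e. it is nearly a linear function of the arc parameter.
corr = abs(np.corrcoef(y, t)[0, 1])
```

Here the high correlation between the embedding coordinate `y` and the trajectory parameter `t` is what "the nonlinear terms vanish" refers to in the abstract: with enough datapoints, the latent coordinate behaves like a linear reparameterization of the demonstrated motion.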