Abstract: We present a nonlinear, non-convex model predictive control approach to solving a real-world labyrinth game. We introduce adaptive nonlinear constraints representing the non-convex obstacles within the labyrinth. Our method splits the computation-heavy optimization problem into two layers: first, a high-level model predictive controller that incorporates the full problem formulation and finds pseudo-global optimal trajectories at a low frequency; second, a low-level model predictive controller that receives a reduced, computationally optimized version of the optimization problem to follow the given high-level path in real time. Further, a map of the labyrinth surface irregularities is learned. Our controller is able to handle the major disturbances and model inaccuracies encountered on the labyrinth and outperforms other classical control methods.
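To make the two-layer structure concrete, the following is a minimal Python sketch of a hierarchical receding-horizon loop. A toy one-dimensional double integrator and simple quadratic costs stand in for the full labyrinth model, obstacle constraints, and solvers described above; all dynamics, horizons, and gains are illustrative assumptions.

```python
# Hedged sketch: two-layer receding-horizon control on a toy plant.
import numpy as np
from scipy.optimize import minimize

dt, H_HIGH, H_LOW, REPLAN = 0.05, 40, 10, 20    # sample time, horizons, replan period (assumed)
A = np.array([[1.0, dt], [0.0, 1.0]])            # toy 1-D double integrator (position, velocity)
B = np.array([0.5 * dt**2, dt])

def rollout(x0, u):
    """Simulate the toy dynamics over the input sequence u."""
    xs = [x0]
    for uk in u:
        xs.append(A @ xs[-1] + B * uk)
    return np.array(xs)

def high_level_plan(x0, goal):
    """Low-frequency solve of the full (here: toy) nonlinear problem."""
    def cost(u):
        xs = rollout(x0, u)
        return np.sum((xs[:, 0] - goal) ** 2) + 1e-2 * np.sum(u ** 2)
    res = minimize(cost, np.zeros(H_HIGH), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * H_HIGH)
    return rollout(x0, res.x)                    # reference trajectory for the low level

def low_level_input(x0, ref):
    """High-frequency solve of a reduced tracking problem; returns only the first input."""
    def cost(u):
        xs = rollout(x0, u)
        return np.sum((xs[:, 0] - ref[:H_LOW + 1, 0]) ** 2) + 1e-3 * np.sum(u ** 2)
    res = minimize(cost, np.zeros(H_LOW), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * H_LOW)
    return res.x[0]                               # receding horizon: apply only the first input

x, goal = np.array([0.0, 0.0]), 1.0
for k in range(100):
    if k % REPLAN == 0:                           # high-level layer replans at a low rate
        plan = high_level_plan(x, goal)
    ref = plan[k % REPLAN:]                       # remaining part of the current high-level plan
    u0 = low_level_input(x, ref)
    x = A @ x + B * u0                            # apply to the (simulated) plant
```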
Abstract: Distributed tactile sensing for multi-force detection is crucial for various aerial robot interaction tasks. However, current contact sensing solutions on drones exploit only single end-effector sensors and cannot provide distributed multi-contact sensing. We propose an optical tactile sensor, designed to be easily mounted at the bottom of a drone, that features a large, curved soft sensing surface, a hollow structure, and a new illumination system. Even when spaced only 2 cm apart, multiple contacts can be detected simultaneously using our software pipeline, which provides real-world quantities of 3D contact locations (mm) and 3D force vectors (N), with an accuracy of 1.5 mm and 0.17 N, respectively. We demonstrate the sensor's applicability and reliability onboard and in real time with two demos related to i) the estimation of the compliance of different perches and subsequent re-alignment and landing on the stiffer one, and ii) the mapping of sparse obstacles. The implementation of our distributed tactile sensor represents a significant step towards attaining the full potential of drones as versatile robots capable of interacting with and navigating within complex environments.
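As a rough illustration of how multiple nearby contacts can be separated from a dense force estimate, the sketch below thresholds a per-taxel normal-force map and clusters it into connected regions, reporting a centroid location and resultant force per contact. The taxel spacing, threshold, and synthetic force map are assumptions; the sensor's actual image processing and learned force model are not reproduced here.

```python
# Hedged sketch: multi-contact extraction from a dense per-taxel force field.
import numpy as np
from scipy import ndimage

force_map = np.zeros((64, 64, 3))             # [H, W, (Fx, Fy, Fz)] in N (synthetic data)
force_map[10:14, 10:14, 2] = 0.2               # two synthetic contact patches
force_map[40:45, 30:36, 2] = 0.3

mask = force_map[:, :, 2] > 0.05               # contact threshold on normal force (N, assumed)
labels, n_contacts = ndimage.label(mask)       # connected-component clustering

TAXEL_PITCH_MM = 2.0                           # assumed taxel spacing
for i in range(1, n_contacts + 1):
    region = labels == i
    ys, xs = np.nonzero(region)
    location_mm = np.array([xs.mean(), ys.mean()]) * TAXEL_PITCH_MM   # contact centroid
    force_N = force_map[region].sum(axis=0)                           # resultant 3-D force
    print(f"contact {i}: location {location_mm} mm, force {force_N} N")
```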
Abstract: Motivated by the challenge of achieving rapid learning in physical environments, this paper presents the development and training of a robotic system designed to navigate and solve a labyrinth game using model-based reinforcement learning techniques. The method involves extracting low-dimensional observations from camera images, along with a cropped and rectified image patch centered on the current position within the labyrinth, providing valuable information about the labyrinth layout. The learning of a control policy is performed purely on the physical system using model-based reinforcement learning, where the progress along the labyrinth's path serves as a reward signal. Additionally, we exploit the system's inherent symmetries to augment the training data. Consequently, our approach learns to successfully solve a popular real-world labyrinth game in record time, with only 5 hours of real-world training data.
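A minimal sketch of the symmetry-based data augmentation idea is given below, assuming a simplified observation layout of ball position and velocity and a two-dimensional tilt action; the actual observation vector, the image-patch handling, and the specific symmetries exploited in the paper are not reproduced.

```python
# Hedged sketch: augmenting transitions with mirrored copies (toy observation layout).
import numpy as np

def augment_with_symmetries(obs, act, rew, next_obs):
    """Return the original transition plus copies mirrored about each axis."""
    transitions = [(obs, act, rew, next_obs)]
    for flip in (np.array([-1.0, 1.0, -1.0, 1.0]),     # mirror the x-direction
                 np.array([1.0, -1.0, 1.0, -1.0])):    # mirror the y-direction
        transitions.append((obs * flip, act * flip[:2], rew, next_obs * flip))
    return transitions

obs = np.array([0.10, -0.20, 0.00, 0.30])   # assumed layout: [ball_x, ball_y, vel_x, vel_y]
act = np.array([0.05, -0.02])                # assumed layout: [tilt_x, tilt_y]
for t in augment_with_symmetries(obs, act, 1.0, obs + 0.01):
    print(t)
```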
Abstract: Grasping objects whose physical properties are unknown is still a great challenge in robotics. Most solutions rely entirely on visual data to plan the best grasping strategy. However, to match human abilities and reliably pick and hold unknown objects, the integration of an artificial sense of touch in robotic systems is pivotal. This paper describes a novel model-based slip detection pipeline that can predict potentially failing grasps in real time and signal a necessary increase in grip force. As such, the slip detector does not rely on manually collected data, but exploits physics to generalize across different tasks. To evaluate the approach, a state-of-the-art vision-based tactile sensor that accurately estimates distributed forces was integrated into a grasping setup composed of a six-degree-of-freedom cobot and a two-finger gripper. Results show that the system can reliably predict slip while manipulating objects of different shapes, materials, and weights. The sensor can detect both translational and rotational slip in various scenarios, making it suitable for improving the stability of a grasp.
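One common physics-based formulation of slip detection checks how close the contact force is to the boundary of the friction cone. The sketch below illustrates this idea on a per-taxel force field with an assumed friction coefficient and trigger margin; it is a simplified stand-in, not the paper's pipeline.

```python
# Hedged sketch: friction-cone slip check on a distributed force estimate.
import numpy as np

MU = 0.6            # assumed static friction coefficient
MARGIN = 0.85       # trigger before the friction cone is fully exhausted (assumed)

def slip_risk(force_field):
    """force_field: (N, 3) per-taxel forces [fx, fy, fz] in N; returns cone utilization."""
    f = force_field.sum(axis=0)                 # resultant contact force
    f_t = np.linalg.norm(f[:2])                 # tangential component
    f_n = max(f[2], 1e-6)                       # normal component
    return f_t / (MU * f_n)                     # >= 1 means outside the friction cone

def grip_force_command(force_field, current_cmd, gain=1.2):
    """Increase the commanded grip force when slip is imminent."""
    return current_cmd * gain if slip_risk(force_field) > MARGIN else current_cmd

field = np.random.default_rng(0).uniform(0, 0.05, size=(100, 3))   # synthetic force field
print(slip_risk(field), grip_force_command(field, 2.0))
```

Rotational slip would additionally require monitoring the torque about the contact normal against a torsional friction limit, which is omitted from this sketch.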
Abstract: This paper presents an offset-free model predictive controller for fast and accurate control of a spherical soft robotic arm. In this control scheme, a linear model is combined with an online disturbance estimation technique to systematically compensate for model deviations. Dynamic effects such as material relaxation resulting from the use of soft materials can thus be addressed to achieve offset-free tracking. The tracking error can be reduced by 35% when compared to a standard model predictive controller without a disturbance compensation scheme. The improved tracking performance enables the realization of a ball-catching application, in which the spherical soft robotic arm can catch a ball thrown by a human.
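The core idea behind offset-free tracking, estimating a disturbance online and accounting for it when computing the control target, can be sketched on a scalar toy plant as below. A simple feedforward-plus-feedback law stands in for the actual MPC, and the plant, observer gain, and disturbance value are illustrative assumptions.

```python
# Hedged sketch: offset-free tracking via an online disturbance estimate (toy plant).
import numpy as np

a, b = 0.9, 0.1            # assumed scalar plant x+ = a*x + b*u + d
d_true = 0.3               # unknown constant disturbance acting on the plant
L = 0.5                    # observer gain (assumed)

x, x_hat, d_hat, ref = 0.0, 0.0, 0.0, 1.0
for k in range(100):
    # feedforward for the steady-state target, corrected by the estimated disturbance,
    # plus a simple stabilizing feedback term (stands in for the MPC)
    u = ((1 - a) * ref - d_hat) / b + 2.0 * (ref - x)
    x = a * x + b * u + d_true                            # true plant response
    # observer update: attribute the one-step prediction error to the disturbance
    y_err = x - (a * x_hat + b * u + d_hat)
    x_hat = a * x_hat + b * u + d_hat + L * y_err
    d_hat += L * y_err
print(f"final state {x:.3f}, estimated disturbance {d_hat:.3f}")
```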
Abstract: In this thesis, we are interested in applying distributed estimation, control, and optimization techniques to enable a group of quadcopters to fly through openings. The quadcopters are assumed to be equipped with a simulated bearing and distance sensor for localization. Some quadcopters are designated as leaders that carry global position sensors. We assume the quadcopters can communicate information with each other.
Abstract: This paper aims to show that robots equipped with a vision-based tactile sensor can perform dynamic manipulation tasks without prior knowledge of all the physical attributes of the objects to be manipulated. For this purpose, a robotic system is presented that is able to swing up poles of different masses, radii, and lengths to an angle of 180 degrees, while relying solely on the feedback provided by the tactile sensor. This is achieved by developing a novel simulator that accurately models the interaction of a pole with the soft sensor. A feedback policy that is conditioned on a history of sensory observations, and which has no prior knowledge of the physical features of the pole, is then learned in the aforementioned simulation. When evaluated on the physical system, the policy is able to swing up a wide range of poles that differ significantly in their physical attributes without further adaptation. To the authors' knowledge, this is the first work in which a feedback policy operating on high-dimensional tactile observations is used to control the swing-up manipulation of poles in closed loop.
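The structure of a policy conditioned on an observation history can be sketched as a fixed-length buffer of recent tactile features fed to a learned mapping at every control step. In the snippet below, a placeholder linear policy and random features stand in for the learned network and the sensor processing; the feature size, history length, and command interpretation are assumptions.

```python
# Hedged sketch: closed-loop control from a buffer of recent tactile observations.
import numpy as np
from collections import deque

OBS_DIM, HISTORY = 32, 10                                # assumed feature size and history length
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(OBS_DIM * HISTORY,))    # placeholder policy weights

history = deque([np.zeros(OBS_DIM)] * HISTORY, maxlen=HISTORY)

def policy(history):
    """Map the stacked observation history to a bounded control command."""
    x = np.concatenate(list(history))
    return float(np.tanh(W @ x))

for step in range(100):
    obs = rng.normal(size=OBS_DIM)            # stand-in for features from the tactile image
    history.append(obs)                        # newest observation replaces the oldest
    u = policy(history)                        # closed-loop command at this control step
```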
Abstract: The images captured by vision-based tactile sensors carry information about high-resolution tactile fields, such as the distribution of the contact forces applied to their soft sensing surface. However, extracting the information encoded in the images is challenging and often addressed with learning-based approaches, which generally require a large amount of training data. This article proposes a strategy to generate tactile images in simulation for a vision-based tactile sensor based on an internal camera that tracks the motion of spherical particles within a soft material. The deformation of the material is simulated in a finite-element environment under a diverse set of contact conditions, and the spherical particles are projected to a simulated image. Features extracted from the images are mapped to the 3D contact force distribution by an artificial neural network, with the ground truth also obtained via finite-element simulations; the network is therefore entirely trained on synthetic data, avoiding the need for real-world data collection. The resulting model exhibits high accuracy when evaluated on real-world tactile images, is transferable across multiple tactile sensors without further training, and is suitable for efficient real-time inference.
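The training setup can be sketched as standard supervised regression trained purely on synthetic pairs. In the snippet below, random arrays stand in for the FEM-generated image features and ground-truth force distributions, and the network size, feature dimension, and discretization of the force distribution are assumptions.

```python
# Hedged sketch: regressing a force distribution from features, trained on synthetic data only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_features, n_bins = 2000, 64, 100                 # assumed sizes

X_synth = rng.normal(size=(n_samples, n_features))            # stand-in for simulated image features
Y_synth = rng.normal(scale=0.1, size=(n_samples, 3 * n_bins)) # stand-in for FEM force distributions

model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200)
model.fit(X_synth, Y_synth)                                   # trained entirely on synthetic data

# at deployment, features extracted from a real tactile image would be fed to the
# same model without any real-world fine-tuning
x_real = rng.normal(size=(1, n_features))
force_distribution = model.predict(x_real).reshape(n_bins, 3)
```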
Abstract: Sensory feedback is essential for the control of soft robotic systems and for enabling their deployment in a variety of different tasks. Proprioception refers to sensing the robot's own state and is of crucial importance for deploying soft robotic systems outside of laboratory environments, i.e., where no external sensing, such as a motion capture system, is available. A vision-based sensing approach for a soft robotic arm made from fabric is presented, leveraging the high-resolution sensory feedback provided by cameras. No mechanical interaction between the sensor and the soft structure is required and, consequently, the compliance of the soft system is preserved. The integration of a camera into an inflatable, fabric-based bellow actuator is discussed. Three actuators, each featuring an integrated camera, are used to control the spherical robotic arm and simultaneously provide sensory feedback on the two rotational degrees of freedom. A convolutional neural network architecture predicts the two angles describing the robot's orientation from the camera images. Ground truth data is provided by a motion capture system during the training phase of the supervised learning approach and for its evaluation thereafter. The camera-based sensing approach provides estimates of the orientation in real time with an accuracy of about one degree. The reliability of the sensing approach is demonstrated by using the sensory feedback to control the orientation of the robotic arm in closed loop.
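The sensing model amounts to a small image-to-angle regression network trained against motion-capture labels. The sketch below shows a minimal convolutional regressor of this kind; the layer sizes, image resolution, and loss are assumptions and do not correspond to the architecture used in the paper.

```python
# Hedged sketch: CNN regressing two orientation angles from a camera image.
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 2)    # two angles describing the orientation

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

net = OrientationNet()
img = torch.rand(1, 3, 128, 128)                 # stand-in for a camera frame
angles = net(img)
# supervised training would regress against motion-capture ground truth labels
loss = nn.functional.mse_loss(angles, torch.zeros(1, 2))
```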
Abstract: This paper presents the application of a learning control approach for the realization of a fast and reliable pick-and-place application with a spherical soft robotic arm. The arm is characterized by a lightweight design and exhibits compliant behavior due to the soft materials deployed. A soft, continuum joint is employed, which allows for simultaneous control of one translational and two rotational degrees of freedom in a single joint. This allows us to axially approach and pick an object with the attached suction cup during the pick-and-place application. An analytical mapping based on pressure differences and the antagonistic actuator configuration is introduced, allowing decoupling of the system dynamics and simplifying the modeling and control. A linear parameter-varying model is identified, which is parametrized by the attached load mass and a parameter related to the joint stiffness. A gain-scheduled feedback controller is proposed, which asymptotically stabilizes the robotic system for aggressive tuning and over large variations of the parameters considered. The control architecture is augmented with an iterative learning control scheme enabling accurate tracking of aggressive trajectories involving set point transitions of 60 degrees within 0.3 seconds (no mass attached) to 0.6 seconds (load mass attached). The modeling and control approach proposed results in a reliable realization of a pick-and-place application and is experimentally demonstrated.
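As a rough illustration of the iterative learning control component, the sketch below refines a feedforward input over repeated trials of the same reference on a scalar toy plant with a fixed feedback controller. The plant, gains, and trial count are assumptions and do not correspond to the identified linear parameter-varying model or the gain-scheduled controller described above.

```python
# Hedged sketch: P-type iterative learning control on a repeated trajectory (toy plant).
import numpy as np

a, b = 0.8, 0.2                      # assumed scalar plant x+ = a*x + b*u
K_FB, L_GAIN = 2.0, 2.0              # assumed feedback gain and ILC learning gain
T = 50
ref = np.sin(np.linspace(0.0, np.pi, T))   # repeated reference trajectory
u_ff = np.zeros(T)                          # feedforward input refined from trial to trial

for trial in range(15):
    x = 0.0
    y = np.zeros(T)
    for t in range(T):
        u = u_ff[t] + K_FB * (ref[t] - x)   # learned feedforward + stabilizing feedback
        x = a * x + b * u
        y[t] = x
    e = ref - y                              # tracking error of this trial
    u_ff += L_GAIN * e                       # P-type ILC update for the next trial

print(f"RMS tracking error after learning: {np.sqrt(np.mean(e**2)):.4f}")
```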