Abstract:We consider the problem where multiple rigid convex polygonal objects rest in randomly placed positions and orientations on a planar surface visible from an overhead camera. The objective is to efficiently grasp and transport all objects into a bin. Specifically, we explore multi-object push-grasps, where multiple objects are pushed together before the grasp can occur. We provide necessary conditions for multi-object push-grasps and apply these to filter inadmissible grasps in a novel multi-object grasp planner. We find that our planner is 19 times faster than a MuJoCo simulator baseline. We also propose a picking algorithm that uses both single- and multi-object grasps to pick objects. In physical grasping experiments, compared to a single-object picking baseline, we find that the multi-object grasping system achieves 13.6% higher grasp success and is 59.9% faster. See https://sites.google.com/view/multi-object-grasping for videos, code, and data.
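The filtering idea in this abstract lends itself to a simple planner loop: enumerate candidate gripper poses, find which objects each pose would push together, and discard the pose unless it satisfies the necessary conditions, all before any expensive simulation. Below is a minimal Python sketch of that loop; `sweep_region_contains` and `passes_necessary_conditions` are assumed interfaces standing in for the paper's geometric checks, not the authors' actual API.

```python
def plan_multi_object_grasps(objects, candidate_grasps, max_gripper_width,
                             passes_necessary_conditions):
    """Filter-based multi-object grasp planning (illustrative sketch)."""
    admissible = []
    for grasp in candidate_grasps:
        # Objects whose polygons lie inside this grasp's closing region
        # (sweep_region_contains is an assumed geometric helper).
        targets = [obj for obj in objects if grasp.sweep_region_contains(obj)]
        if len(targets) < 2:
            continue  # only multi-object grasps are of interest here
        # Cheap necessary-condition checks prune inadmissible grasps long
        # before any physics simulation would be needed.
        if passes_necessary_conditions(targets, grasp, max_gripper_width):
            admissible.append((grasp, targets))
    return admissible
```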
Abstract:We present a planning and control framework for physics-based manipulation under uncertainty. The key idea is to interleave robust open-loop execution with closed-loop control. We derive robustness metrics through contraction theory. We use these metrics to plan trajectories that are robust to both state uncertainty and model inaccuracies. However, fully robust trajectories are extremely difficult to find or may not exist for many multi-contact manipulation problems. We separate a trajectory into robust and non-robust segments through a minimum-cost path search on a robustness graph. Robust segments are executed open-loop and non-robust segments are executed with model-predictive control. We conduct experiments on a real robotic system for reaching in clutter. Our results suggest that the open- and closed-loop approach results in up to 35% more real-world success compared to open-loop baselines and a 40% reduction in execution time compared to model-predictive control. We show for the first time that partially open-loop manipulation plans generated with our approach reach similar success rates to model-predictive control, while achieving a more fluent/real-time execution. A video showing real-robot executions can be found at https://youtu.be/rPOPCwHfV4g.
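One way to realise the robust/non-robust split described in this abstract is a shortest-path search over a graph whose nodes are trajectory waypoints and whose edge (i, j) means "execute waypoints i..j as one segment". The sketch below assumes a `robustness_cost(i, j)` callable (e.g. derived from contraction-based robustness metrics) that is cheap for segments safe to run open-loop and expensive for segments that would need model-predictive control; the function and its cost shaping are assumptions, not the paper's implementation.

```python
import heapq

def segment_trajectory(num_waypoints, robustness_cost):
    """Minimum-cost segmentation of a trajectory on a robustness graph (sketch)."""
    dist, prev = {0: 0.0}, {}
    queue = [(0.0, 0)]
    while queue:
        d, i = heapq.heappop(queue)
        if i == num_waypoints - 1:
            break
        if d > dist.get(i, float("inf")):
            continue
        for j in range(i + 1, num_waypoints):
            cost = d + robustness_cost(i, j)
            if cost < dist.get(j, float("inf")):
                dist[j], prev[j] = cost, i
                heapq.heappush(queue, (cost, j))
    # Walk back from the goal to recover the chosen segments.
    segments, j = [], num_waypoints - 1
    while j != 0:
        segments.append((prev[j], j))
        j = prev[j]
    return list(reversed(segments))
```

Each returned segment would then be executed open-loop if it is robust enough and handed to the model-predictive controller otherwise.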
Abstract:We address the manipulation task of retrieving a target object from a cluttered shelf. When the target object is hidden, the robot must search through the clutter to retrieve it. Solving this task requires reasoning over the likely locations of the target object. It also requires physics reasoning over multi-object interactions and future occlusions. In this work, we present a data-driven approach for generating occlusion-aware actions in closed loop. We present a hybrid planner that explores likely states generated from a learned distribution over the location of the target object. The search is guided by a heuristic trained with reinforcement learning to evaluate occluded observations. We evaluate our approach in different environments with varying clutter densities and physics parameters. The results validate that our approach can search for and retrieve a target object in different physics environments, while being trained only in simulation. It achieves near real-time behaviour with a success rate exceeding 88%.
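A rough reading of the closed-loop search is sketched below: sample hypotheses of the hidden target's pose from the learned distribution, simulate each candidate action forward, and score the resulting occluded states with the RL-trained heuristic. `pose_model`, `physics_model`, and `heuristic` are assumed interfaces used only to make the structure concrete.

```python
def search_step(observation, pose_model, physics_model, heuristic,
                candidate_actions, num_hypotheses=16):
    """One decision step of an occlusion-aware search (illustrative sketch)."""
    # Likely locations of the hidden target, drawn from the learned distribution.
    hypotheses = [pose_model.sample(observation) for _ in range(num_hypotheses)]
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        # Average the learned heuristic over the simulated outcomes for
        # each hypothesised target location.
        value = sum(
            heuristic(physics_model.predict(observation, h, action))
            for h in hypotheses
        ) / num_hypotheses
        if value > best_value:
            best_action, best_value = action, value
    return best_action  # executed, then the loop repeats on the new observation
```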
Abstract:We present a human-guided planner for non-prehensile manipulation in clutter. Most recent approaches to manipulation in clutter employ randomized planning; however, the problem remains challenging: planning times are still on the order of tens of seconds or minutes, and success rates are low for difficult instances of the problem. We build on these control-based randomized planning approaches, but we investigate using them in conjunction with human-operator input. We show that with a minimal amount of human input, the low-level planner can solve the problem faster and with higher success rates.
Abstract:Humans, in comparison to robots, are remarkably adept at reaching for objects in cluttered environments. The best existing robot planners are based on random sampling of configuration space -- which becomes excessively high-dimensional with a large number of objects. Consequently, such planners often fail to efficiently find object manipulation plans in these environments. We addressed this problem by identifying high-level manipulation plans in humans and transferring these skills to robot planners. We used virtual reality to capture human participants reaching for a target object on a tabletop cluttered with obstacles. From this, we devised a qualitative representation of the task space to abstract the decision making, irrespective of the number of obstacles. Based on this representation, human demonstrations were segmented and used to train decision classifiers. Using these classifiers, our planner produced a list of waypoints in task space. These waypoints provided a high-level plan, which could be transferred to an arbitrary robot model and used to initialise a local trajectory optimiser. We evaluated this approach through testing on unseen human VR data, a physics-based robot simulation, and a real robot (dataset and code are publicly available). We found that the human-like planner outperformed a state-of-the-art standard trajectory optimisation algorithm, and was able to generate effective strategies for rapid planning -- irrespective of the number of obstacles in the environment.
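The pipeline in this abstract can be pictured as a small loop that repeatedly abstracts the scene, asks the trained classifier for the next high-level decision, and records the resulting task-space waypoint. The sketch below is only illustrative: `qualitative_features`, `apply_high_level_action`, and the discrete action labels are assumptions standing in for the paper's qualitative representation and segmented human demonstrations.

```python
def plan_waypoints(scene, qualitative_features, decision_classifier, max_steps=20):
    """Generate a human-like high-level plan as task-space waypoints (sketch)."""
    hand_pose = scene.start_pose
    waypoints = [hand_pose]
    for _ in range(max_steps):
        # Obstacle-count-independent abstraction of the current situation.
        features = qualitative_features(scene, hand_pose)
        action = decision_classifier.predict([features])[0]  # e.g. 'go_around', 'push', 'reach_target'
        hand_pose = scene.apply_high_level_action(hand_pose, action)  # assumed helper
        waypoints.append(hand_pose)
        if action == "reach_target":
            break
    # These waypoints initialise a local trajectory optimiser on the chosen
    # robot model; they are not a full joint-space trajectory themselves.
    return waypoints
```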
Abstract:This paper presents a novel swarm robotics application of the chemotaxis behaviour observed in microorganisms. The approach is used to make exploring robots return to a work area around the swarm's nest within a boundless environment. We investigate the performance of our algorithm through extensive simulation studies and hardware validation. Results show that the chemotaxis approach is effective at keeping the swarm close to both stationary and moving nests. Comparing these results with the unrealistic case in which a boundary wall keeps the swarm within a target search area shows that our chemotaxis approach produces competitive results.
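Chemotaxis in microorganisms is often modelled as run-and-tumble: keep moving straight while the sensed signal improves, and turn randomly more often when it fades. A minimal per-step controller in that spirit is sketched below; the nest signal, the robot attributes, and the probability gains are assumptions for illustration rather than the parameters used in the paper.

```python
import random

def chemotaxis_step(robot, nest_signal_strength, base_tumble_prob=0.3):
    """Run-and-tumble step biased toward the nest (illustrative sketch)."""
    current = nest_signal_strength(robot.position)  # assumed virtual gradient from the nest
    if current >= robot.last_signal:
        tumble_prob = 0.5 * base_tumble_prob   # signal improving: keep running
    else:
        tumble_prob = 2.0 * base_tumble_prob   # signal fading: tumble more often
    if random.random() < min(tumble_prob, 1.0):
        robot.heading = random.uniform(0.0, 360.0)  # pick a new random heading
    robot.last_signal = current
    robot.move_forward()  # assumed low-level motion primitive
```

Because tumbles become rarer whenever a robot moves up the gradient, the swarm drifts back toward the nest without needing any boundary wall.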
Abstract:Swarm foraging is a common test case application for multi-robot systems. In this paper we present a novel algorithm for controlling swarm robots with limited communication range and storage capacity to efficiently search for and retrieve targets within an unknown environment. In our approach, robots search using a random walk and adjust their turn probability based on attraction and repulsion signals they sense from other robots. We compared our algorithm with five different variations reflecting the absence or presence of attractive and/or repulsive communication signals. Our results show that the best performance is achieved when both signals are used by robots for communication. Furthermore, we show through hardware experiments how the communication model we used in the simulation could be realized on real robots.
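One plausible reading of the signal-modulated random walk is sketched below: each robot keeps a baseline turn probability and shifts it down when it hears attraction signals from neighbours and up when it hears repulsion signals. The sensing interface and gains are illustrative assumptions, not the exact model from the paper.

```python
import random

def foraging_step(robot, base_turn_prob=0.1, gain=0.05):
    """Random-walk search with attraction/repulsion-modulated turning (sketch)."""
    # Counts of attraction and repulsion messages heard within the limited
    # communication range (assumed interface).
    attraction, repulsion = robot.sense_signals()
    turn_prob = base_turn_prob - gain * attraction + gain * repulsion
    turn_prob = min(max(turn_prob, 0.0), 1.0)
    if random.random() < turn_prob:
        robot.heading = random.uniform(0.0, 360.0)
    robot.move_forward()
```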
Abstract:We propose a human-operator guided planning approach to pushing-based robotic manipulation in clutter. Most recent approaches to this problem employ the power of randomized planning (e.g. control-sampling-based kinodynamic RRT) to produce a fast working solution. We build on these control-based randomized planning approaches, but we investigate using them in conjunction with human-operator input. In our framework, the human operator supplies a high-level plan, in the form of an ordered sequence of objects and their approximate goal positions. We present experiments in simulation and on a real robotic setup, where we compare the success rate and planning times of our human-in-the-loop approach with fully autonomous sampling-based planners. We show that the guidance provided by the human operator makes the low-level kinodynamic planner solve the planning problem faster and with higher success rates.
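In this framework the human supplies only an ordered list of objects with rough goal positions; the randomized planner then solves one much smaller kinodynamic sub-problem per object. A minimal sketch of that decomposition is shown below, with `kinodynamic_planner.plan` and the returned segment fields as assumed interfaces.

```python
def human_guided_push_planning(scene, high_level_plan, kinodynamic_planner):
    """Human-in-the-loop pushing in clutter (illustrative sketch).

    high_level_plan: ordered list of (object_id, approximate_goal_pose)
    supplied by the human operator, as described in the abstract.
    """
    controls = []
    for object_id, goal_pose in high_level_plan:
        # The operator's ordering and rough goals shrink each sub-problem,
        # so the low-level randomized planner searches a much smaller space.
        segment = kinodynamic_planner.plan(scene, object_id, goal_pose)
        if segment is None:
            return None  # guidance failed; the operator may revise the plan
        controls.extend(segment.controls)
        scene = segment.predicted_scene  # assumed: planner returns the predicted end scene
    return controls
```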
Abstract:Physics-based manipulation in clutter involves complex interaction between multiple objects. In this paper, we consider the problem of learning, from interaction in a physics simulator, manipulation skills to solve this multi-step sequential decision making problem in the real world. Our approach has two key properties: (i) the ability to generalize (over the shape and number of objects in the scene) using an abstract image-based representation that enables a neural network to learn useful features; and (ii) the ability to perform look-ahead planning using a physics simulator, which is essential for such multi-step problems. We show, in sets of simulated and real-world experiments (video available on https://youtu.be/EmkUQfyvwkY), that by learning to evaluate actions in an abstract image-based representation of the real world, the robot can generalize and adapt to the object shapes in challenging real-world environments.
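The two properties named in this abstract suggest a simple action-selection loop: render the scene into the abstract image representation, score candidate actions with the learned network, and use the physics simulator for a short look-ahead before committing. The sketch below assumes `abstract_image`, `q_network`, and `simulator.step` interfaces purely for illustration.

```python
def lookahead_action(state, candidate_actions, simulator, abstract_image,
                     q_network, depth=2):
    """Look-ahead action selection with a learned evaluator (sketch)."""

    def value(s, d):
        image = abstract_image(s)  # shape- and count-agnostic representation
        scores = [q_network(image, a) for a in candidate_actions]
        if d == 0:
            return max(scores)
        # Expand only the most promising action with the physics simulator.
        best = candidate_actions[scores.index(max(scores))]
        return value(simulator.step(s, best), d - 1)

    # Choose the action whose simulated outcome has the highest look-ahead value.
    return max(candidate_actions,
               key=lambda a: value(simulator.step(state, a), depth - 1))
```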
Abstract:We present a method for fast and accurate physics-based predictions during non-prehensile manipulation planning and control. Given an initial state and a sequence of controls, the problem of predicting the resulting sequence of states is a key component of a variety of model-based planning and control algorithms. We propose combining a coarse (i.e. computationally cheap but not very accurate) predictive physics model with a fine (i.e. computationally expensive but accurate) predictive physics model, to generate a hybrid model that is at the required speed and accuracy for a given manipulation task. Our approach is based on the Parareal algorithm, a parallel-in-time integration method used for computing numerical solutions for general systems of ordinary differential equations. We use Parareal to combine a coarse pushing model with an off-the-shelf physics engine to deliver physics-based predictions that are as accurate as the physics engine but run in substantially less wall-clock time, thanks to Parareal being amenable to parallelization. We use these physics-based predictions in a model-predictive-control framework based on trajectory optimization to plan pushing actions that avoid an obstacle and reach a goal location. We show that by combining the two physics models, we can achieve the same success rates as the planner that uses the off-the-shelf physics engine directly, but significantly faster. We present experiments in simulation and on a real robotic setup.
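For reference, the standard Parareal update combines the two propagators as U_{n+1}^{k+1} = G(U_n^{k+1}, u_n) + F(U_n^k, u_n) - G(U_n^k, u_n), where G is the coarse model and F the fine one; the fine-model evaluations in each iteration depend only on the previous iterate, which is what makes them parallelizable across time slices. A minimal sequential sketch follows, with `coarse_model` and `fine_model` as assumed callables (e.g. an analytical pushing model and a physics engine step); it is not the authors' implementation.

```python
import numpy as np

def parareal_predict(x0, controls, coarse_model, fine_model, num_iters=2):
    """Hybrid state-sequence prediction via Parareal (illustrative sketch).

    coarse_model(x, u): cheap, approximate one-step propagator.
    fine_model(x, u):   accurate, expensive one-step propagator.
    Both are assumed to return NumPy state vectors.
    """
    n_steps = len(controls)
    # Initial guess: roll out the whole control sequence with the coarse model.
    states = [np.asarray(x0, dtype=float)]
    for n in range(n_steps):
        states.append(coarse_model(states[n], controls[n]))

    for _ in range(num_iters):
        # Fine and coarse evaluations on the previous iterate; these are the
        # calls that can run in parallel across time slices.
        fine = [fine_model(states[n], controls[n]) for n in range(n_steps)]
        coarse_old = [coarse_model(states[n], controls[n]) for n in range(n_steps)]

        new_states = [states[0]]
        for n in range(n_steps):
            # Parareal correction: new coarse prediction plus the
            # (fine - coarse) defect measured on the old trajectory.
            g_new = coarse_model(new_states[n], controls[n])
            new_states.append(g_new + fine[n] - coarse_old[n])
        states = new_states
    return states
```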