Abstract:Current multimodal and multitask foundation models like 4M or UnifiedIO show promising results, but in practice their out-of-the-box abilities to accept diverse inputs and perform diverse tasks are limited by the (usually rather small) number of modalities and tasks they are trained on. In this paper, we expand their capabilities by training a single model on tens of highly diverse modalities and by co-training on large-scale multimodal datasets and text corpora. This includes training on several semantic and geometric modalities, feature maps from recent state-of-the-art models like DINOv2 and ImageBind, pseudo-labels from specialist models like SAM and 4DHumans, and a range of new modalities that allow for novel ways to interact with the model and steer the generation, for example image metadata or color palettes. A crucial step in this process is performing discrete tokenization on various modalities, whether they are image-like, neural network feature maps, vectors, structured data like instance segmentation or human poses, or data that can be represented as text. Through this, we expand the out-of-the-box capabilities of multimodal models and specifically show that a single model can be trained to solve at least 3x more tasks/modalities than existing ones, without a loss in performance. This enables more fine-grained and controllable multimodal generation capabilities and allows us to study the distillation of models trained on diverse data and objectives into a unified model. We successfully scale training to a three-billion-parameter model using tens of modalities and different datasets. The resulting models and training code are open-sourced at 4m.epfl.ch.
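As an illustration of the discrete tokenization step this abstract names, here is a minimal vector-quantization sketch in Python: continuous feature vectors (e.g., a flattened feature map) are mapped to indices of their nearest codebook entries. The codebook here is random and the shapes are assumptions for illustration; 4M's actual tokenizers are learned, modality-specific models.

```python
# Minimal sketch of discrete tokenization via vector quantization (VQ).
# The codebook is random for illustration; real tokenizers are learned
# (e.g., VQ-VAE variants), one per modality.
import numpy as np

def tokenize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each feature vector to the index of its nearest codebook entry.

    features: (N, D) continuous vectors, e.g. a flattened feature map.
    codebook: (K, D) embedding table.
    Returns (N,) integer token ids consumable by a sequence model.
    """
    # Squared Euclidean distance between every feature and every code.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(1024, 64))   # K = 1024 codes of dimension 64
feats = rng.normal(size=(16 * 16, 64))   # one 16x16 feature map, flattened
tokens = tokenize(feats, codebook)       # 256 discrete tokens
print(tokens[:8])
```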
Abstract:We consider the problem of building an assistive robotic system that can help humans with daily household cleanup tasks. Creating such an autonomous system in real-world environments is inherently challenging, as a general solution may not suit the preferences of a particular customer. Moreover, such a system comprises several sub-tasks: (i) detection of misplaced objects and prediction of their correct placements, (ii) fine-grained manipulation for stable object grasping, and (iii) room-to-room navigation for transferring objects in unseen environments. This work systematically tackles each component and integrates them into a complete object rearrangement pipeline. To validate the proposed system, we conduct multiple experiments on a real robotic platform involving multi-room object transfer, user-preference-based placement, and complex pick-and-place tasks. Project page: https://sites.google.com/eng.ucsd.edu/home-robot
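To make the three-stage structure concrete, below is a minimal Python skeleton that chains the stages listed in the abstract. Every function body is a hypothetical stub, since the abstract does not specify the underlying models or the preference representation.

```python
# Skeleton of the three-stage rearrangement pipeline from the abstract.
# All stage implementations are hypothetical stubs for illustration.
from dataclasses import dataclass

@dataclass
class Placement:
    obj: str
    room: str
    receptacle: str

def detect_misplaced(scene: dict) -> list[str]:
    """Stage (i): find objects that are out of place (stub)."""
    return [o for o, loc in scene.items() if loc == "floor"]

def predict_placement(obj: str) -> Placement:
    """Stage (i): predict a correct placement, e.g. from user preferences (stub)."""
    prefs = {"mug": Placement("mug", "kitchen", "shelf")}  # assumed preference table
    return prefs.get(obj, Placement(obj, "storage", "bin"))

def grasp(obj: str) -> bool:
    """Stage (ii): fine-grained manipulation for a stable grasp (stub)."""
    return True

def navigate_and_place(p: Placement) -> bool:
    """Stage (iii): room-to-room navigation and placement (stub)."""
    return True

def cleanup(scene: dict) -> None:
    for obj in detect_misplaced(scene):
        target = predict_placement(obj)
        if grasp(obj):
            navigate_and_place(target)

cleanup({"mug": "floor", "lamp": "table"})
```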
Abstract:Due to their complexity, foliated structure problems often pose intricate challenges for task and motion planning in robotic manipulation. To address this, our study presents the "Foliated Repetition Roadmap," which assists task and motion planners by transforming the complex foliated structure problem into a more accessible graph format. By leveraging query experiences from different foliated manifolds, our framework can dynamically and efficiently update this graph. The refined graph can generate distribution sets that optimize motion planning performance on foliated structure problems. We lay down the theoretical groundwork and illustrate the framework's practical applications through real-world examples.
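The abstract only sketches how query experience updates the roadmap; the toy Python below shows one plausible reading, where edge costs between manifolds shrink as planning queries across them succeed. The graph semantics and the update rule are assumptions made for illustration, not the paper's definitions.

```python
# Speculative sketch: a roadmap whose edge costs are updated from planning
# experience. Edge semantics and the update rule are assumptions.
from collections import defaultdict

class ExperienceRoadmap:
    def __init__(self):
        # edge -> [successes, attempts], initialized optimistically at 1/2.
        self.stats = defaultdict(lambda: [1, 2])

    def update(self, edge: tuple, success: bool) -> None:
        s, n = self.stats[edge]
        self.stats[edge] = [s + int(success), n + 1]

    def cost(self, edge: tuple) -> float:
        s, n = self.stats[edge]
        return n / s  # inverse success rate: reliable edges get cheaper

rm = ExperienceRoadmap()
rm.update(("manifold_A", "manifold_B"), True)   # a successful cross-manifold query
rm.update(("manifold_B", "manifold_C"), False)  # a failed one
print(rm.cost(("manifold_A", "manifold_B")))    # 1.5
print(rm.cost(("manifold_B", "manifold_C")))    # 3.0
```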
Abstract:A number of grasping algorithms have been proposed that can predict candidate grasp poses, even for unseen objects, enabling a robotic manipulator to pick and place such objects. However, some of the grasp poses predicted to stably lift a target object may not be directly approachable due to workspace limitations. In such cases, the robot needs to re-grasp the desired object before it can be manipulated successfully. This involves planning a sequence of continuous actions such as sliding, re-grasping, and transferring. To address this multi-modal problem, we propose a Markov Decision Process (MDP)-based multi-modal planner that can rearrange the object into a position suitable for stable manipulation. We demonstrate improved performance in both simulation and the real world on pick-and-place tasks.
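As a toy illustration of an MDP formulation for this kind of re-grasping problem, the Python below runs value iteration over a four-state chain with slide/re-grasp/transfer actions. The states, dynamics, and costs are invented for illustration and are not the paper's model.

```python
# Toy value iteration for a re-grasping MDP. States, dynamics, and costs
# are invented for illustration only.
import numpy as np

states = ["unreachable", "at_edge", "graspable", "placed"]
actions = ["slide", "regrasp", "transfer"]

# Deterministic toy dynamics: P[a][s] -> index of the next state.
P = {
    "slide":    {0: 1, 1: 1, 2: 2, 3: 3},  # sliding brings the object to the edge
    "regrasp":  {0: 0, 1: 2, 2: 2, 3: 3},  # re-grasping works at the edge
    "transfer": {0: 0, 1: 1, 2: 3, 3: 3},  # transfer once stably graspable
}
R = {"slide": -1.0, "regrasp": -2.0, "transfer": -1.0}  # per-action costs

gamma = 0.95
V = np.zeros(len(states))  # "placed" (index 3) is terminal with value 0

for _ in range(100):  # value iteration
    for s in range(3):  # skip the terminal state
        V[s] = max(R[a] + gamma * V[P[a][s]] for a in actions)

policy = {states[s]: max(actions, key=lambda a, s=s: R[a] + gamma * V[P[a][s]])
          for s in range(3)}
print(policy)  # {'unreachable': 'slide', 'at_edge': 'regrasp', 'graspable': 'transfer'}
```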
Abstract:The World Robot Summit (WRS) 2020 Assembly Challenge is designed to let teams demonstrate how to build flexible, robust systems for the assembly of machined objects. We present our approach to assembly, based on the integration of machine vision, robust planning, execution using behavior trees, and a hierarchy of recovery strategies to ensure robust operation. Our system was selected for the WRS 2020 Assembly Challenge finals based on its robust performance in the qualifying rounds. We present the systems approach adopted for the challenge.
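To illustrate the behavior-tree-with-recovery pattern the abstract refers to, here is a self-contained Python sketch: a Selector falls back to a recovery branch when the nominal assembly sequence fails. Node names and the failing action are invented; the team's actual trees are far richer.

```python
# Minimal behavior-tree sketch with a recovery fallback. Node names and the
# failing action are invented for illustration.
from typing import Callable

class Sequence:
    """Tick children in order; fail on the first failing child."""
    def __init__(self, children): self.children = children
    def tick(self) -> bool:
        return all(c.tick() for c in self.children)

class Selector:
    """Tick children in order; succeed on the first succeeding child."""
    def __init__(self, children): self.children = children
    def tick(self) -> bool:
        return any(c.tick() for c in self.children)

class Action:
    def __init__(self, name: str, fn: Callable[[], bool]):
        self.name, self.fn = name, fn
    def tick(self) -> bool:
        ok = self.fn()
        print(f"{self.name}: {'ok' if ok else 'failed'}")
        return ok

attempts = {"n": 0}
def insert_peg() -> bool:  # toy action: fails first, succeeds after recovery
    attempts["n"] += 1
    return attempts["n"] > 1

tree = Selector([
    Sequence([Action("locate_part", lambda: True),
              Action("insert_peg", insert_peg)]),
    Sequence([Action("recover_regrasp", lambda: True),  # recovery branch
              Action("insert_peg", insert_peg)]),
])
assert tree.tick()  # the recovery branch rescues the run
```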
Abstract:The World Robotics Challenge (2018 & 2020) challenged teams to design systems that are easy to adapt to new tasks and that operate robustly in a semi-structured environment. We present a layered strategy to transform missions into tasks and actions, together with a set of strategies to address both simple and complex failures. We propose a model for characterizing failures and discuss how repairs are performed within it. Simple failures are by far the most common in our WRC system, and we also present how we repaired them.
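One way to make such a failure model concrete is as a classify-then-repair dispatch, sketched below in Python. The failure classes, classification rule, and repairs are hypothetical stand-ins, not the model described in the paper.

```python
# Hypothetical sketch of classifying failures and dispatching repairs.
from enum import Enum, auto

class Failure(Enum):
    SIMPLE = auto()    # e.g. missed grasp, dropped part: retry locally
    COMPLEX = auto()   # e.g. invalid assembly state: replan the mission

def classify(error: str) -> Failure:
    # Assumed rule: perception/grasp errors are simple, everything else complex.
    simple = ("grasp_missed", "part_dropped", "detection_timeout")
    return Failure.SIMPLE if error in simple else Failure.COMPLEX

def repair(error: str) -> str:
    if classify(error) is Failure.SIMPLE:
        return f"retry action after local recovery ({error})"
    return f"abort action and replan mission ({error})"

for e in ("grasp_missed", "assembly_state_invalid"):
    print(repair(e))
```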
Abstract:In the robotics industry, specular and textureless metallic components are ubiquitous. Estimating the 6D pose of such objects with only a monocular RGB camera is difficult because of the absence of rich texture features. Furthermore, the appearance of specularity depends heavily on the camera viewpoint and environmental lighting conditions, causing traditional methods such as template matching to fail. Over the last 30 years, pose estimation of specular objects has remained a consistent challenge, and most related works require massive knowledge-modeling effort for the light setup, the environment, or the object surface. On the other hand, recent works show the feasibility of 6D pose estimation from a monocular camera with convolutional neural networks (CNNs); however, they mostly use opaque objects for evaluation. This paper provides a data-driven solution for estimating the 6D pose of specular objects for grasping, proposes a cost function for handling symmetry, and demonstrates experimental results showing the system's feasibility.
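A standard way to build symmetry into such a pose cost (in the spirit of ADD-S-style metrics) is to take the minimum mean point distance over the object's known symmetry transforms. The sketch below illustrates that generic idea and is not necessarily the exact cost function the paper proposes.

```python
# Sketch of a symmetry-aware pose cost: minimize the mean model-point
# distance over the object's symmetry transforms. A generic ADD-S-style
# construction, not necessarily the paper's exact formulation.
import numpy as np

def sym_aware_cost(R_pred, t_pred, R_gt, t_gt, points, sym_rotations):
    """min_S mean_i || (R_pred x_i + t_pred) - (R_gt S x_i + t_gt) ||"""
    pred = points @ R_pred.T + t_pred
    costs = []
    for S in sym_rotations:  # each S leaves the object's shape unchanged
        gt = points @ (R_gt @ S).T + t_gt
        costs.append(np.linalg.norm(pred - gt, axis=1).mean())
    return min(costs)

# Toy check: a part with 2-fold symmetry about the z-axis. The two poses
# below differ exactly by that symmetry, so the cost is (numerically) zero.
Rz180 = np.diag([-1.0, -1.0, 1.0])
pts = np.random.default_rng(0).normal(size=(100, 3))
zero = np.zeros(3)
print(sym_aware_cost(Rz180, zero, np.eye(3), zero, pts, [np.eye(3), Rz180]))
```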