Publications by Coline Devin

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023

RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation

Jun 20, 2023

How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation

May 06, 2022

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes

Nov 03, 2021

Fully Autonomous Real-World Reinforcement Learning for Mobile Manipulation

Aug 03, 2021

Modularity Improves Out-of-Domain Instruction Following

Oct 24, 2020

Self-Supervised Goal-Conditioned Pick and Place

Aug 26, 2020

Learning To Reach Goals Without Reinforcement Learning

Dec 13, 2019

SMiRL: Surprise Minimizing RL in Dynamic Environments

Dec 11, 2019

Plan Arithmetic: Compositional Plan Vectors for Multi-Task Control

Oct 30, 2019