Moo Jin Kim

OpenVLA: An Open-Source Vision-Language-Action Model
Jun 13, 2024

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Oct 17, 2023

BridgeData V2: A Dataset for Robot Learning at Scale
Aug 24, 2023

Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations
Jul 12, 2023

NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis
Jan 18, 2023

Vision-Based Manipulators Need to Also See from Their Hands
Mar 15, 2022