Joey Hejna

Vision Language Models are In-Context Value Learners

Nov 07, 2024

So You Think You Can Scale Up Autonomous Robot Data Collection?

Nov 04, 2024

MotIF: Motion Instruction Fine-tuning

Sep 16, 2024

Re-Mix: Optimizing Data Mixtures for Large Scale Imitation Learning

Aug 26, 2024

Show, Don't Tell: Aligning Language Models with Demonstrated Feedback

Jun 02, 2024

Octo: An Open-Source Generalist Robot Policy

May 20, 2024

From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function

Apr 18, 2024

DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

Mar 19, 2024

Contrastive Preference Learning: Learning from Human Feedback without RL

Oct 24, 2023

Improving Long-Horizon Imitation Through Instruction Prediction

Jun 21, 2023