Danny Driess

Gemini Robotics: Bringing AI into the Physical World

Mar 25, 2025

Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models

Feb 26, 2025

FAST: Efficient Action Tokenization for Vision-Language-Action Models

Jan 16, 2025

Vision Language Models are In-Context Value Learners

Nov 07, 2024

RT-Affordance: Affordances are Versatile Intermediate Representations for Robot Manipulation

Nov 05, 2024

$π_0$: A Vision-Language-Action Flow Model for General Robot Control

Oct 31, 2024

ALOHA Unleashed: A Simple Recipe for Robot Dexterity

Oct 17, 2024

Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers

Mar 19, 2024

PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs

Feb 12, 2024

SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities

Jan 22, 2024