Tianhe Yu

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
Mar 08, 2024

RT-Sketch: Goal-Conditioned Imitation Learning from Hand-Drawn Sketches
Mar 05, 2024

Gemini: A Family of Highly Capable Multimodal Models
Dec 19, 2023

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Oct 17, 2023

Video Language Planning
Oct 16, 2023

Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions
Sep 18, 2023

RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control
Jul 28, 2023

Contrastive Example-Based Control
Jul 24, 2023

Train Offline, Test Online: A Real Robot Learning Benchmark
Jun 01, 2023

PaLM-E: An Embodied Multimodal Language Model
Mar 06, 2023