Arthur Bucker

Grounding Robot Policies with Visuomotor Language Guidance

Oct 10, 2024

Windows Agent Arena: Evaluating Multi-Modal OS Agents at Scale

Sep 12, 2024

LaTTe: Language Trajectory TransformEr

Aug 09, 2022

Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers

Mar 25, 2022

Batteries, camera, action! Learning a semantic control space for expressive robot cinematography

Nov 19, 2020

Do You See What I See? Coordinating Multiple Aerial Cameras for Robot Cinematography

Nov 10, 2020