Asher J. Hancock

LAP: Language-Action Pre-Training Enables Zero-shot Cross-Embodiment Transfer

Feb 15, 2026
Actions as Language: Fine-Tuning VLMs into VLAs Without Catastrophic Forgetting

Sep 26, 2025
Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust

Oct 02, 2024