
Jia Zeng

GRMLR: Knowledge-Enhanced Small-Data Learning for Deep-Sea Cold Seep Stage Inference

Mar 25, 2026

ForceVLA2: Unleashing Hybrid Force-Position Control with Force Awareness for Contact-Rich Manipulation

Mar 16, 2026

FutureVLA: Joint Visuomotor Prediction for Vision-Language-Action Model

Mar 11, 2026

UltraDexGrasp: Learning Universal Dexterous Grasping for Bimanual Robots with Synthetic Data

Mar 05, 2026

Robo3R: Enhancing Robotic Manipulation with Accurate Feed-Forward 3D Reconstruction

Feb 10, 2026

Nimbus: A Unified Embodied Synthetic Data Generation Framework

Jan 29, 2026

InternVLA-A1: Unifying Understanding, Generation and Action for Robotic Manipulation

Jan 05, 2026

FastUMI-100K: Advancing Data-driven Robotic Manipulation with a Large-scale UMI-style Dataset

Oct 09, 2025

SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning

Sep 11, 2025

F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions

Sep 09, 2025