Haoyuan Deng

E2HiL: Entropy-Guided Sample Selection for Efficient Real-World Human-in-the-Loop Reinforcement Learning

Jan 27, 2026

NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-based Preference Rewards

Nov 18, 2025

MAP-VLA: Memory-Augmented Prompting for Vision-Language-Action Model in Robotic Manipulation

Nov 12, 2025

VLA-Reasoner: Empowering Vision-Language-Action Models with Reasoning via Online Monte Carlo Tree Search

Sep 26, 2025

SafeBimanual: Diffusion-based Trajectory Optimization for Safe Bimanual Manipulation

Aug 25, 2025

ManiGaussian++: General Robotic Bimanual Manipulation with Hierarchical Gaussian World Model

Jun 24, 2025

AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation

Dec 09, 2024