Haoyi Niu

Skill Expansion and Composition in Parameter Space

Feb 09, 2025

Are Expressive Models Truly Necessary for Offline RL?

Dec 15, 2024

xTED: Cross-Domain Policy Adaptation via Diffusion-Based Trajectory Editing

Sep 13, 2024

Multi-Objective Trajectory Planning with Dual-Encoder

Mar 26, 2024

DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning

Feb 28, 2024

A Comprehensive Survey of Cross-Domain Policy Transfer for Embodied Agents

Feb 07, 2024

Stackelberg Driver Model for Continual Policy Improvement in Scenario-Based Closed-Loop Autonomous Driving

Sep 25, 2023

Continual Driving Policy Optimization with Closed-Loop Individualized Curricula

Sep 25, 2023

H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps

Sep 22, 2023

(Re)$^2$H2O: Autonomous Driving Scenario Generation via Reversely Regularized Hybrid Offline-and-Online Reinforcement Learning

Feb 27, 2023