Shuqi Zhao

DexH2R: Task-oriented Dexterous Manipulation from Human to Robots

Nov 07, 2024

X-Drive: Cross-modality consistent multi-sensor data synthesis for driving scenarios

Nov 02, 2024

PhyGrasp: Generalizing Robotic Grasping with Physics-informed Large Multimodal Models

Feb 26, 2024

Failure-aware Policy Learning for Self-assessable Robotics Tasks

Feb 25, 2023

A Joint Modeling of Vision-Language-Action for Target-oriented Grasping in Clutter

Feb 24, 2023