Haoran Geng

Learning from Massive Human Videos for Universal Humanoid Pose Control

Dec 18, 2024

GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation

Nov 27, 2024

DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes

Oct 30, 2024

D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation

Sep 25, 2024

PhysPart: Physically Plausible Part Completion for Interactable Objects

Aug 25, 2024

RAM: Retrieval-Based Affordance Transfer for Generalizable Zero-Shot Robotic Manipulation

Jul 05, 2024

FreeCG: Free the Design Space of Clebsch-Gordan Transform for Machine Learning Force Field

Jul 02, 2024

Ag2Manip: Learning Novel Manipulation Skills with Agent-Agnostic Visual and Action Representations

Apr 26, 2024

ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

Mar 06, 2024

ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation

Dec 24, 2023