Bin Fang

EHC-MM: Embodied Holistic Control for Mobile Manipulation

Sep 13, 2024

When Vision Meets Touch: A Contemporary Review for Visuotactile Sensors from the Signal Processing Perspective

Jun 18, 2024

Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation

Jun 06, 2024

Transformer in Touch: A Survey

May 21, 2024

Soft Contact Simulation and Manipulation Learning of Deformable Objects with Vision-based Tactile Sensor

May 12, 2024

Simulation of Optical Tactile Sensors Supporting Slip and Rotation using Path Tracing and IMPM

May 05, 2024

What Foundation Models can Bring for Robot Learning in Manipulation: A Survey

Apr 28, 2024

Towards Comprehensive Multimodal Perception: Introducing the Touch-Language-Vision Dataset

Mar 14, 2024

A Lightweight Parallel Framework for Blind Image Quality Assessment

Feb 19, 2024

Hierarchical Visual Policy Learning for Long-Horizon Robot Manipulation in Densely Cluttered Scenes

Dec 05, 2023