Fu-Jen Chu

OmniPose6D: Towards Short-Term Object Pose Tracking in Dynamic Scenes from Monocular RGB

Oct 09, 2024

Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos

Sep 30, 2024

Unlocking Exocentric Video-Language Data for Egocentric Video Representation Learning

Aug 07, 2024

HyperMix: Out-of-Distribution Detection and Classification in Few-Shot Settings

Dec 22, 2023

Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

Nov 30, 2023

Primitive Shape Recognition for Object Grasping

Jan 04, 2022

GKNet: grasp keypoint network for grasp candidates detection

Jun 16, 2021

Using Synthetic Data and Deep Networks to Recognize Primitive Shapes for Object Grasping

Sep 12, 2019

Detecting Robotic Affordances on Novel Objects with Regional Attention and Attributes

Sep 12, 2019

The Helping Hand: An Assistive Manipulation Framework Using Augmented Reality and a Tongue-Drive Interface

Aug 24, 2018