Junghyun Kim

CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision

Nov 01, 2024

Socratic Planner: Inquiry-Based Zero-Shot Planning for Embodied Instruction Following

Apr 21, 2024

PGA: Personalizing Grasping Agents with Single Human-Robot Interaction

Oct 19, 2023

PROGrasp: Pragmatic Human-Robot Communication for Object Grasping

Sep 14, 2023

GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation

Jul 12, 2023

Structured World Belief for Reinforcement Learning in POMDP

Jul 19, 2021