
Wenlong Huang

ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation

Sep 03, 2024

VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models

Jul 12, 2023

PaLM-E: An Embodied Multimodal Language Model

Mar 06, 2023

Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control

Mar 01, 2023

Code as Policies: Language Model Programs for Embodied Control

Sep 19, 2022

Inner Monologue: Embodied Reasoning through Planning with Language Models

Jul 12, 2022

Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents

Jan 18, 2022

Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning

Nov 04, 2021

One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control

Jul 09, 2020