Hao-Tien Lewis Chiang

Google

Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs

Jul 10, 2024

Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios

Oct 17, 2023

Principles and Guidelines for Evaluating Social Robot Navigation Algorithms

Jun 29, 2023

Language to Rewards for Robotic Skill Synthesis

Jun 16, 2023

Scene Transformer: A unified multi-task model for behavior prediction and planning

Jun 15, 2021

RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators from RL Policies

Jul 12, 2019

Long-Range Indoor Navigation with PRM-RL

Feb 25, 2019

Learning Navigation Behaviors End-to-End with AutoRL

Feb 01, 2019

PEARL: PrEference Appraisal Reinforcement Learning for Motion Planning

Nov 30, 2018

Deep Neural Networks for Swept Volume Prediction Between Configurations

May 29, 2018