
Yantian Zha

NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction

Mar 04, 2024

"Task Success" is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors

Feb 06, 2024

Learning from Ambiguous Demonstrations with Self-Explanation Guided Reinforcement Learning

Oct 14, 2021

Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems

Sep 21, 2021

Contrastively Learning Visual Attention as Affordance Cues from Demonstrations for Robotic Grasping

Apr 11, 2021

Explicability as Minimizing Distance from Expected Behavior

Mar 13, 2019

Plan-Recognition-Driven Attention Modeling for Visual Recognition

Dec 02, 2018

Discovering Underlying Plans Based on Shallow Models

Mar 04, 2018

Recognizing Plans by Learning Embeddings from Observed Action Distributions

Dec 05, 2017