Malayandi Palan

Learning Reward Functions from Diverse Sources of Human Feedback: Optimally Integrating Demonstrations and Preferences

Jun 24, 2020

Asking Easy Questions: A User-Friendly Approach to Active Reward Learning

Oct 10, 2019

Learning Reward Functions by Integrating Human Demonstrations and Preferences

Jun 21, 2019