Wonjoon Goo

Know Your Boundaries: The Necessity of Explicit Behavioral Cloning in Offline RL

Jun 01, 2022

A Ranking Game for Imitation Learning

Feb 07, 2022

You Only Evaluate Once: a Simple Baseline Algorithm for Offline RL

Oct 05, 2021

Self-Supervised Online Reward Shaping in Sparse-Reward Environments

Mar 08, 2021

Local Nonparametric Meta-Learning

Feb 09, 2020

Ranking-Based Reward Extrapolation without Rankings

Jul 13, 2019

Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations

May 14, 2019

One-Shot Learning of Multi-Step Tasks from Observation via Activity Localization in Auxiliary Video

Sep 19, 2018