Takahisa Imagawa

Unsupervised Discovery of Continuous Skills on a Sphere
May 25, 2023

Dropout Q-Functions for Doubly Efficient Reinforcement Learning
Oct 05, 2021

Off-Policy Meta-Reinforcement Learning Based on Feature Embedding Spaces
Jan 06, 2021

Meta-Model-Based Meta-Policy Optimization
Jun 05, 2020

Optimistic Proximal Policy Optimization
Jun 25, 2019

Learning Robust Options by Conditional Value at Risk Optimization
Jun 11, 2019

Refining Manually-Designed Symbol Grounding and High-Level Planning by Policy Gradients
Sep 29, 2018