Carlos Florensa

Which Mutual-Information Representation Learning Objectives are Sufficient for Control?
Jun 14, 2021

Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning
May 26, 2020

Goal-conditioned Imitation Learning
Jun 13, 2019

Sub-policy Adaptation for Hierarchical Reinforcement Learning
Jun 13, 2019

Adaptive Variance for Changing Sparse-Reward Environments
Mar 15, 2019

Self-supervised Learning of Image Embedding for Continuous Control
Jan 03, 2019

Reverse Curriculum Generation for Reinforcement Learning
Jul 23, 2018

Automatic Goal Generation for Reinforcement Learning Agents
Jul 23, 2018

Stochastic Neural Networks for Hierarchical Reinforcement Learning
Apr 10, 2017