Lucas Lehnert

Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping

Feb 21, 2024

Maximum State Entropy Exploration using Predecessor and Successor Representations

Jun 26, 2023

IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control

Jun 01, 2023

Reward-Predictive Clustering

Nov 07, 2022

Successor Features Support Model-based and Model-free Reinforcement Learning

Jan 31, 2019

Mitigating Planner Overfitting in Model-Based Reinforcement Learning

Dec 03, 2018

Transfer with Model Features in Reinforcement Learning

Jul 04, 2018

Advantages and Limitations of using Successor Features for Transfer in Reinforcement Learning

Jul 31, 2017

Policy Gradient Methods for Off-policy Control

Dec 13, 2015