Felix Leibfried

Variational Inference for Model-Free and Model-Based Reinforcement Learning

Sep 04, 2022

Bellman: A Toolbox for Model-Based Reinforcement Learning in TensorFlow

Apr 13, 2021

GPflux: A Library for Deep Gaussian Processes

Apr 12, 2021

A Tutorial on Sparse Gaussian Processes and Variational Inference

Feb 02, 2021

Mutual-Information Regularization in Markov Decision Processes and Actor-Critic Learning

Sep 11, 2019

A Unified Bellman Optimality Principle Combining Reward Maximization and Empowerment

Sep 09, 2019

Model-Based Stabilisation of Deep Reinforcement Learning

Sep 06, 2018

Regularised Deep Reinforcement Learning with Guaranteed Convergence

Sep 06, 2018

An information-theoretic on-line update principle for perception-action coupling

Apr 16, 2018

Balancing Two-Player Stochastic Games with Soft Q-Learning

Feb 09, 2018