Christopher Grimm

Proper Value Equivalence

Jun 18, 2021

Warping of Radar Data into Camera Image for Cross-Modal Supervision in Automotive Applications

Dec 23, 2020

The Value Equivalence Principle for Model-Based Reinforcement Learning

Nov 06, 2020

Disentangled Cumulants Help Successor Representations Transfer to New Tasks

Nov 25, 2019

Learning Independently-Obtainable Reward Functions

Jan 31, 2019

Mitigating Planner Overfitting in Model-Based Reinforcement Learning

Dec 03, 2018

Deep Abstract Q-Networks

Aug 25, 2018

Modeling Latent Attention Within Neural Networks

Dec 30, 2017

Learning Approximate Stochastic Transition Models

Oct 26, 2017

Summable Reparameterizations of Wasserstein Critics in the One-Dimensional Setting

Sep 19, 2017