Michael Gimelfarb

Constraint-Generation Policy Optimization (CGPO): Nonlinear Programming for Policy Optimization in Mixed Discrete-Continuous MDPs

Jan 20, 2024

Thompson Sampling for Parameterized Markov Decision Processes with Uninformative Actions

May 13, 2023

pyRDDLGym: From RDDL to Gym Environments

Nov 14, 2022

Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization

Oct 07, 2022

RAPTOR: End-to-end Risk-Aware MDP Planning and Policy Learning by Backpropagation

Jun 14, 2021

Risk-Aware Transfer in Reinforcement Learning using Successor Features

May 28, 2021

ε-BMC: A Bayesian Ensemble Approach to Epsilon-Greedy Exploration in Model-Free Reinforcement Learning

Jul 02, 2020

Bayesian Experience Reuse for Learning from Multiple Demonstrators

Jun 10, 2020

Contextual Policy Reuse using Deep Mixture Models

Feb 29, 2020