
Maurits Kaptein

Rethinking Knowledge Transfer in Learning Using Privileged Information

Aug 26, 2024

Efficient Exploration in Average-Reward Constrained Reinforcement Learning: Achieving Near-Optimal Regret With Posterior Sampling

May 29, 2024

Provably Efficient Exploration in Constrained Reinforcement Learning: Posterior Sampling Is All You Need

Sep 27, 2023

An Empirical Evaluation of Posterior Sampling for Constrained Reinforcement Learning

Sep 08, 2022

The Impact of Batch Learning in Stochastic Linear Bandits

Feb 14, 2022

The Impact of Batch Learning in Stochastic Bandits

Nov 03, 2021

Exploring Offline Policy Evaluation for the Continuous-Armed Bandit Problem

Aug 21, 2019

Continuous-Time Birth-Death MCMC for Bayesian Regression Tree Models

Apr 19, 2019

contextual: Evaluating Contextual Multi-Armed Bandit Problems in R

Nov 08, 2018

Maximum likelihood estimation of a finite mixture of logistic regression models in a continuous data stream

Feb 28, 2018