Philip Amortila

Reinforcement Learning under Latent Dynamics: Toward Statistical and Algorithmic Modularity

Oct 23, 2024

Scalable Online Exploration via Coverability

Mar 11, 2024

Mitigating Covariate Shift in Misspecified Regression with Applications to Reinforcement Learning

Jan 22, 2024

Harnessing Density Ratios for Online Reinforcement Learning

Jan 18, 2024

The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation

Jul 25, 2023

A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation

Jul 18, 2022

On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function

Feb 04, 2021

A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting

Nov 04, 2020

Exponential Lower Bounds for Planning in MDPs With Linearly-Realizable Optimal Action-Value Functions

Oct 03, 2020

Constrained Markov Decision Processes via Backward Value Functions

Aug 26, 2020