Phillip Swazinna

Learning Control Policies for Variable Objectives from Offline Data
Aug 11, 2023

Automatic Trade-off Adaptation in Offline RL
Jun 16, 2023

User-Interactive Offline Reinforcement Learning
May 21, 2022

Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning
Jan 14, 2022

Measuring Data Quality for Dataset Selection in Offline Reinforcement Learning
Nov 26, 2021

Behavior Constraining in Weight Space for Offline Reinforcement Learning
Jul 12, 2021

Overcoming Model Bias for Robust Offline Deep Reinforcement Learning
Sep 09, 2020