
Peter Vamplew

Adaptive Alignment: Dynamic Preference Adjustments via Multi-Objective Reinforcement Learning for Pluralistic AI

Oct 31, 2024

Multi-objective Reinforcement Learning: A Tool for Pluralistic Alignment

Oct 15, 2024

Value function interference and greedy action selection in value-based multi-objective reinforcement learning

Feb 09, 2024

Utility-Based Reinforcement Learning: Unifying Single-objective and Multi-objective Reinforcement Learning

Feb 05, 2024

An Empirical Investigation of Value-Based Multi-objective Reinforcement Learning for Stochastic Environments

Jan 06, 2024

Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety

May 30, 2023

Broad-persistent Advice for Interactive Reinforcement Learning Scenarios

Oct 11, 2022

Elastic Step DQN: A novel multi-step algorithm to alleviate overestimation in Deep Q-Networks

Oct 07, 2022

Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios

Jul 07, 2022

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey

Aug 20, 2021