Ana L. C. Bazzan

Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization

Jan 18, 2023

Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer

Jun 22, 2022

Improving Urban Mobility: using artificial intelligence and new technologies to connect supply and demand

Mar 18, 2022

Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection

May 20, 2021

Quantitatively Assessing the Benefits of Model-driven Development in Agent-based Modeling and Simulation

Jun 15, 2020

Quantifying the Impact of Non-Stationarity in Reinforcement Learning-Based Traffic Signal Control

Apr 09, 2020

Temporal Network Analysis of Literary Texts

Feb 22, 2016