Roberto Capobianco

DeepDFA: Automata Learning through Neural Probabilistic Relaxations

Aug 16, 2024

Neural Reward Machines

Aug 16, 2024

Towards a fuller understanding of neurons with Clustered Compositional Explanations

Oct 27, 2023

Detection Accuracy for Evaluating Compositional Explanations of Units

Sep 16, 2021

Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models

Jun 04, 2021
Figure 1 for Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models
Figure 2 for Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models
Figure 3 for Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models
Figure 4 for Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models
Viaarxiv icon

Reinforcement Learning for Optimization of COVID-19 Mitigation policies

Oct 20, 2020

DOP: Deep Optimistic Planning with Approximate Value Function Evaluation

Mar 22, 2018

Q-CP: Learning Action Values for Cooperative Planning

Mar 01, 2018

Learning Human-Robot Handovers Through π-STAM: Policy Improvement With Spatio-Temporal Affordance Maps

Oct 15, 2016

STAM: A Framework for Spatio-Temporal Affordance Maps

Jul 01, 2016