
Guillermo A. Pérez

Active Learning of Mealy Machines with Timers

Mar 04, 2024

Synthesis of Hierarchical Controllers Based on Deep Reinforcement Learning Policies

Feb 21, 2024

Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search using Data Aggregation with Formal Methods

Aug 15, 2023

Graph-Based Reductions for Parametric and Weighted MDPs

May 09, 2023

Wasserstein Auto-encoded MDPs: Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees

Mar 22, 2023

The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models

Mar 06, 2023

Distillation of RL Policies with Formal Guarantees via Variational Abstraction of Markov Decision Processes (Technical Report)

Dec 17, 2021

Safe Learning for Near Optimal Scheduling

May 19, 2020

Robustness Verification for Classifier Ensembles

May 12, 2020

Let's Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework

Apr 06, 2020