Francesco Giannini

Faculty of Sciences, Scuola Normale Superiore, Pisa

Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts

Mar 20, 2025

Logic Explanation of AI Classifiers by Categorical Explaining Functors

Mar 20, 2025

Mathematical Foundation of Interpretable Equivariant Surrogate Models

Mar 03, 2025

Neural Interpretable Reasoning

Feb 17, 2025

Interpretable Concept-Based Memory Reasoning

Jul 22, 2024

AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model

May 26, 2024

Explainable Malware Detection with Tailored Logic Explained Networks

May 05, 2024

Climbing the Ladder of Interpretability with Counterfactual Concept Bottleneck Models

Feb 02, 2024

Relational Concept Based Models

Aug 23, 2023

Categorical Foundations of Explainable AI: A Unifying Formalism of Structures and Semantics

Apr 27, 2023