Francesco Giannini

Faculty of Sciences, Scuola Normale Superiore, Pisa

Interpretable Concept-Based Memory Reasoning

Jul 22, 2024

AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model

May 26, 2024

Explainable Malware Detection with Tailored Logic Explained Networks

May 05, 2024

Climbing the Ladder of Interpretability with Counterfactual Concept Bottleneck Models

Feb 02, 2024

Relational Concept Based Models

Aug 23, 2023

Interpretable Neural-Symbolic Concept Reasoning

Apr 27, 2023

Categorical Foundations of Explainable AI: A Unifying Formalism of Structures and Semantics

Apr 27, 2023

Enhancing Embedding Representations of Biomedical Data using Logic Knowledge

Mar 23, 2023

Extending Logic Explained Networks to Text Classification

Nov 04, 2022

Concept Embedding Models

Sep 19, 2022