Francesco Giannini

Faculty of Sciences, Scuola Normale Superiore, Pisa

Position: Explaining Behavioral Shifts in Large Language Models Requires a Comparative Approach
Feb 02, 2026

Mixture of Concept Bottleneck Experts
Feb 02, 2026

Actionable Interpretability Must Be Defined in Terms of Symmetries
Jan 19, 2026

DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs
Nov 11, 2025

If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Apr 29, 2025

Logic Explanation of AI Classifiers by Categorical Explaining Functors
Mar 20, 2025

Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts
Mar 20, 2025

Mathematical Foundation of Interpretable Equivariant Surrogate Models
Mar 03, 2025

Neural Interpretable Reasoning
Feb 17, 2025

Interpretable Concept-Based Memory Reasoning
Jul 22, 2024