Xavier Renard

Post-processing fairness with minimal changes

Aug 27, 2024

Dynamic Interpretability for Model Comparison via Decision Rules

Sep 29, 2023

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

Jul 09, 2021

Understanding surrogate explanations: the interplay between complexity, fidelity and coverage

Jul 09, 2021

On the overlooked issue of defining explanation objectives for local-surrogate explainers

Jun 10, 2021

Understanding Prediction Discrepancies in Machine Learning Classifiers

Apr 12, 2021

QUACKIE: A NLP Classification Task With Ground Truth Explanations

Dec 27, 2020

Sentence-Based Model Agnostic NLP Interpretability

Dec 27, 2020

Imperceptible Adversarial Attacks on Tabular Data

Dec 13, 2019

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

Jul 22, 2019