Anna Hedström

From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation
Dec 07, 2024

Benchmarking XAI Explanations with Human-Aligned Evaluations
Nov 04, 2024

Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond
Oct 10, 2024

CoSy: Evaluating Textual Explanations of Neurons
May 30, 2024

A Fresh Look at Sanity Checks for Saliency Maps
May 03, 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
Jan 12, 2024

Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability
Dec 13, 2023

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
Mar 01, 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Feb 14, 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations
Feb 14, 2022