Anna Hedström

Benchmarking XAI Explanations with Human-Aligned Evaluations
Nov 04, 2024

Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond
Oct 10, 2024

CoSy: Evaluating Textual Explanations of Neurons
May 30, 2024

A Fresh Look at Sanity Checks for Saliency Maps
May 03, 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
Jan 12, 2024

Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability
Dec 13, 2023

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
Mar 01, 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Feb 14, 2023

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations
Feb 14, 2022
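
A minimal sketch of how a Quantus metric is typically invoked, following the metric-as-callable pattern in the project's public quickstart: a metric is instantiated as an object and then called on a model, inputs, labels, and (optionally precomputed) attributions. The toy model and random data below are placeholders, and exact keyword arguments may differ between Quantus versions.

```python
import numpy as np
import torch.nn as nn
import quantus

# Toy classifier and random data as stand-ins for a real model/dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Robustness metric: how much do explanations change under small
# input perturbations? Lower scores indicate more stable explanations.
metric = quantus.MaxSensitivity(nr_samples=10, lower_bound=0.2)

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=None,  # attributions are computed on the fly via explain_func
    device="cpu",
    explain_func=quantus.explain,                # Quantus' built-in explainer wrapper
    explain_func_kwargs={"method": "Saliency"},  # gradient-based saliency
)
print(scores)  # one sensitivity score per input
```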

NoiseGrad: enhancing explanations by introducing stochasticity to model weights
Jun 18, 2021
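
The title summarizes the method: sample several noisy copies of the model's weights, explain each copy, and average the resulting attributions. Below is a minimal sketch of that idea using plain gradient saliency as the base explainer; the function name, multiplicative-noise scheme, and hyperparameters are illustrative, not the authors' reference implementation.

```python
import copy
import torch
import torch.nn as nn

def noisegrad_saliency(model, x, target, n_samples=10, sigma=0.2):
    """Average gradient saliency over n_samples noisy copies of the model,
    perturbing weights multiplicatively: w' = w * (1 + sigma * eps)."""
    saliency = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.mul_(1 + sigma * torch.randn_like(p))
        xg = x.clone().requires_grad_(True)
        # Gradient of the target-class logit w.r.t. the input.
        noisy(xg)[torch.arange(x.shape[0]), target].sum().backward()
        saliency += xg.grad.abs()
    return saliency / n_samples

# Toy usage with a random classifier and inputs (placeholders).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
attr = noisegrad_saliency(model, x, y)
print(attr.shape)  # torch.Size([4, 1, 28, 28])
```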