Leander Weber

A Fresh Look at Sanity Checks for Saliency Maps

May 03, 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

Jan 12, 2024

Layer-wise Feedback Propagation

Aug 23, 2023

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

Nov 22, 2022

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

May 11, 2022

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement

Mar 15, 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations

Feb 14, 2022

Measurably Stronger Explanation Reliability via Model Canonization

Feb 14, 2022

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging

Feb 07, 2022

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution

Apr 22, 2020