Thomas Wiegand

Ensuring Medical AI Safety: Explainable AI-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data

Jan 23, 2025

Mechanistic understanding and validation of large AI models with SemanticLens

Jan 09, 2025

Opportunities and limitations of explaining quantum machine learning

Dec 19, 2024

Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond

Oct 10, 2024

PINNfluence: Influence Functions for Physics-Informed Neural Networks

Sep 13, 2024

Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers

Aug 22, 2024

DualView: Data Attribution from the Dual Perspective

Feb 19, 2024

AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers

Feb 08, 2024

Layer-wise Feedback Propagation

Aug 23, 2023

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations

Nov 21, 2022