Lennart Brocki

False Sense of Security in Explainable Artificial Intelligence (XAI)
May 06, 2024

Class-Discriminative Attention Maps for Vision Transformers
Dec 04, 2023

Challenges of Large Language Models for Mental Health Counseling
Nov 23, 2023

Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models
Mar 20, 2023

Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators
Mar 02, 2023

Deep Learning Mental Health Dialogue System
Jan 23, 2023

Evaluation of importance estimators in deep learning classifiers for Computed Tomography
Sep 30, 2022

Evaluation of Interpretability Methods and Perturbation Artifacts in Deep Neural Networks
Mar 06, 2022

Removing Brightness Bias in Rectified Gradients
Nov 14, 2020

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
Oct 29, 2019