
Neo Christopher Chung

False Sense of Security in Explainable Artificial Intelligence (XAI)

May 06, 2024

Class-Discriminative Attention Maps for Vision Transformers

Dec 04, 2023

Challenges of Large Language Models for Mental Health Counseling

Nov 23, 2023

Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models

Mar 20, 2023

Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators

Mar 02, 2023

Deep Learning Mental Health Dialogue System

Jan 23, 2023

Evaluation of importance estimators in deep learning classifiers for Computed Tomography

Sep 30, 2022

Evaluation of Interpretability Methods and Perturbation Artifacts in Deep Neural Networks

Mar 06, 2022

Human in the Loop for Machine Creativity

Oct 07, 2021

Removing Brightness Bias in Rectified Gradients

Nov 14, 2020