Neil Jethani

A dynamic risk score for early prediction of cardiogenic shock using machine learning

Mar 28, 2023
Figures 1–4

Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation

Feb 24, 2023

New-Onset Diabetes Assessment Using Artificial Intelligence-Enhanced Electrocardiography

May 05, 2022
Figures 1–4

FastSHAP: Real-Time Shapley Value Estimation

Jul 15, 2021
Figures 1–4

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations

Mar 02, 2021
Figures 1–4