
Julius Adebayo

Concept Bottleneck Language Models for Protein Design

Nov 09, 2024

How Aligned are Generative Models to Humans in High-Stakes Decision-Making?

Oct 20, 2024

Error Discovery by Clustering Influence Embeddings

Dec 07, 2023

Quantifying and mitigating the impact of label errors on model disparity metrics

Oct 04, 2023

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation

Dec 09, 2022

Debugging Tests for Model Explanations

Nov 10, 2020

Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging

Aug 06, 2020

Explaining Explanations to Society

Jan 19, 2019

Sanity Checks for Saliency Maps

Oct 28, 2018

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

Oct 08, 2018