Leilani H. Gilpin

Right this way: Can VLMs Guide Us to See More to Answer Questions?

Nov 01, 2024

Towards a fuller understanding of neurons with Clustered Compositional Explanations

Oct 27, 2023

Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations

Oct 17, 2023

Convolutional Neural Network Model for Diabetic Retinopathy Feature Extraction and Classification

Oct 16, 2023

Anticipatory Thinking Challenges in Open Worlds: Risk Management

Jun 22, 2023

"Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI

Jun 27, 2022

Explaining Explanations to Society

Jan 19, 2019

Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning

Jun 04, 2018