Ruth Fong

Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application

May 15, 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

Mar 27, 2023

Interactive Visual Feature Search

Nov 28, 2022

Improving Fine-Grain Segmentation via Interpretable Modifications: A Case Study in Fossil Segmentation

Oct 08, 2022

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Oct 02, 2022

Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability

Jul 20, 2022

Gender Artifacts in Visual Datasets

Jun 18, 2022

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

Jun 16, 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations

Jan 10, 2022

Debiasing Convolutional Neural Networks via Meta Orthogonalization

Nov 15, 2020