Sunnie S. Y. Kim

"I'm Not Sure, But": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

May 01, 2024

Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy

Apr 14, 2024

WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the Annual CVPR Conference

Sep 22, 2023

Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application

May 15, 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

Mar 27, 2023

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Oct 02, 2022

Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability

Jul 20, 2022

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

Jun 16, 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations

Jan 10, 2022

Cleaning and Structuring the Label Space of the iMet Collection 2020

Jun 01, 2021