
Arvind Satyanarayan

Toward Cultural Interpretability: A Linguistic Anthropological Framework for Describing and Evaluating Large Language Models (LLMs)

Nov 07, 2024

Abstraction Alignment: Comparing Model and Human Conceptual Relationships

Jul 17, 2024

What is a Fair Diffusion Model? Designing Generative Text-To-Image Models to Incorporate Various Worldviews

Sep 18, 2023

VisText: A Benchmark for Semantically Rich Chart Captioning

Jun 28, 2023

Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods

Jun 07, 2022

Teaching Humans When To Defer to a Classifier via Exemplars

Nov 22, 2021

LMdiff: A Visual Diff Tool to Compare Language Models

Nov 02, 2021

Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content

Oct 08, 2021

Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment

Jul 20, 2021

Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

Feb 17, 2021