Jennifer Wortman Vaughan

Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice

Dec 09, 2024

"I'm Not Sure, But": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust

May 01, 2024

Open Datasheets: Machine-readable Documentation for Open Datasets and Responsible AI Assessments

Dec 11, 2023

Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment

Jun 05, 2023

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

Jun 02, 2023

GAM Coach: Towards Interactive and User-centered Algorithmic Recourse

Mar 01, 2023

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience

Feb 21, 2023

Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions

Feb 14, 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Jan 18, 2023

How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?

Nov 22, 2022