Valerie Chen

The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers

Apr 03, 2024

Do LLMs exhibit human-like response biases? A case study in survey design

Nov 07, 2023

AdvisingNets: Learning to Distinguish Correct and Wrong Classifications via Nearest-Neighbor Explanations

Aug 25, 2023

FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines

Jul 28, 2023

Learning Personalized Decision Support Policies

Apr 13, 2023

Assisting Human Decisions in Document Matching

Feb 16, 2023

A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies

Feb 15, 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Jan 18, 2023

On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods

Jun 30, 2022

Use-Case-Grounded Simulations for Explanation Evaluation

Jun 05, 2022