
Eve Fleisig

Accurate and Data-Efficient Toxicity Prediction when Annotators Disagree

Oct 16, 2024

ADVSCORE: A Metric for the Evaluation and Creation of Adversarial Benchmarks

Jun 24, 2024

Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination

Jun 13, 2024

Standard Language Ideology in AI-Generated Language

Jun 13, 2024

The Perspectivist Paradigm Shift: Assumptions and Challenges of Capturing Human Labels

May 09, 2024

Mapping Social Choice Theory to RLHF

Add code
Apr 19, 2024
Viaarxiv icon

Incorporating Worker Perspectives into MTurk Annotation Practices for NLP

Nov 16, 2023

First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models

Nov 08, 2023

When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks

May 24, 2023

Centering the Margins: Outlier-Based Identification of Harmed Populations in Toxicity Detection

May 24, 2023