Sharon Levy

LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education
Oct 17, 2024

Gender Bias in Decision-Making with Large Language Models: A Study of Relationship Conflicts
Oct 14, 2024

Lost in Translation? Translation Errors and Challenges for Fair Assessment of Text-to-Image Models on Multilingual Concepts
Mar 17, 2024

Evaluating Biases in Context-Dependent Health Questions
Mar 07, 2024

ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models
Oct 14, 2023

Comparing Biases and the Impact of Multilingual Training across Multiple Languages
May 18, 2023

Foveate, Attribute, and Rationalize: Towards Safe and Trustworthy AI
Dec 19, 2022

SafeText: A Benchmark for Exploring Physical Safety in Language Models
Oct 18, 2022

Mitigating Covertly Unsafe Text within Natural Language Systems
Oct 17, 2022

Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation
May 19, 2022