Aylin Caliskan

Intrinsic Bias is Predicted by Pretraining Data and Correlates with Downstream Performance in Vision-Language Encoders

Feb 11, 2025

A Taxonomy of Stereotype Content in Large Language Models

Jul 31, 2024

Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval

Jul 29, 2024

Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach

Jul 24, 2024

Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis

Jul 02, 2024

BiasDora: Exploring Hidden Biased Associations in Vision-Language Models

Jul 02, 2024

ChatGPT as Research Scientist: Probing GPT's Capabilities as a Research Librarian, Research Ethicist, Data Generator and Data Predictor

Jun 20, 2024

'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion

Nov 10, 2023

Pre-trained Speech Processing Models Contain Human-Like Biases that Propagate to Speech Emotion Recognition

Oct 29, 2023

Is the U.S. Legal System Ready for AI's Challenges to Human Values?

Sep 05, 2023