Angelina Wang

Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

Feb 06, 2024

Measuring Implicit Bias in Explicitly Unbiased Large Language Models

Feb 06, 2024

Overcoming Bias in Pretrained Models by Manipulating the Finetuning Dataset

Mar 10, 2023

Gender Artifacts in Visual Datasets

Jun 18, 2022

Measuring Representational Harms in Image Captioning

Jun 14, 2022

Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

May 10, 2022

Understanding and Evaluating Racial Biases in Image Captioning

Jun 16, 2021

Directional Bias Amplification

Feb 24, 2021

ViBE: A Tool for Measuring and Mitigating Bias in Image Datasets

Apr 16, 2020

Learning Robotic Manipulation through Visual Planning and Acting

May 11, 2019