Kathleen C. Fraser

Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods

Jun 21, 2024

Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes

Apr 18, 2024

Uncovering Bias in Large Vision-Language Models with Counterfactuals

Mar 29, 2024

Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images

Feb 08, 2024

Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers

Jul 04, 2023
Figure 1 for Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
Figure 2 for Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
Figure 3 for Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
Figure 4 for Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
Viaarxiv icon

The crime of being poor

Mar 24, 2023

A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?

Feb 14, 2023

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

Oct 19, 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

Jun 08, 2022

Does Moral Code Have a Moral Code? Probing Delphi's Moral Philosophy

May 25, 2022