Isar Nejadgholi

Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes

Apr 18, 2024

Projective Methods for Mitigating Gender Bias in Pre-trained Language Models

Mar 27, 2024

Socially Aware Synthetic Data Generation for Suicidal Ideation Detection Using Large Language Models

Jan 25, 2024

Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers

Jul 04, 2023

ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations

Jun 15, 2023

The crime of being poor

Mar 24, 2023

A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?

Feb 14, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

Oct 19, 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

Jun 08, 2022