Toon Calders

"Patriarchy Hurts Men Too." Does Your Model Agree? A Discussion on Fairness Assumptions

Aug 01, 2024

FairFlow: An Automated Approach to Model-based Counterfactual Data Augmentation For NLP

Jul 23, 2024

Cherry on the Cake: Fairness is NOT an Optimization Problem

Jun 24, 2024

How to be fair? A study of label and selection bias

Mar 21, 2024

Beyond Accuracy-Fairness: Stop evaluating bias mitigation methods solely on between-group metrics

Jan 24, 2024

Model-based Counterfactual Generator for Gender Bias Mitigation

Nov 06, 2023

How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification

Jan 30, 2023

Text Style Transfer for Bias Mitigation using Masked Language Modeling

Jan 21, 2022

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models

Dec 14, 2021

Detecting and Explaining Drifts in Yearly Grant Applications

Oct 16, 2018