Abstract: This study describes a procedure for applying causal modeling to detect and mitigate algorithmic bias in a multiclass classification problem. The dataset was derived from the FairFace dataset, supplemented with emotion labels generated by the DeepFace pre-trained model. A custom Convolutional Neural Network (CNN) was developed, consisting of four convolutional blocks followed by fully connected layers, with dropout layers to mitigate overfitting. Gender bias was identified in the CNN model's classifications: females were more likely to be classified as "happy" or "sad," while males were more likely to be classified as "neutral." To address this, the one-vs-all (OvA) technique was applied: a causal model was constructed for each emotion class to adjust the CNN model's predicted class probabilities. The adjusted probabilities were then aggregated by selecting the class with the highest probability. The resulting debiased classifications demonstrated improved gender fairness across all classes, with negligible impact on overall accuracy, and in some cases a slight improvement. This study highlights that algorithmic fairness and accuracy are not necessarily trade-offs. All data and code for this study are publicly available for download.
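A minimal sketch of the setup described above, assuming TensorFlow/Keras. The filter counts, dense width, and input size are illustrative guesses (the abstract specifies only four convolutional blocks, fully connected layers, and dropout), and the per-class causal adjustment is abstracted as a caller-supplied function, since its exact form is not given here.

```python
# Hypothetical sketch, not the authors' exact architecture or causal model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # e.g., happy, sad, neutral (the classes named in the abstract)

def build_cnn(input_shape=(224, 224, 3)):
    """Four convolutional blocks, then fully connected layers with dropout."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 256):  # four conv blocks; sizes are assumptions
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dropout(0.5))  # dropout to curb overfitting
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    return model

def debias_predictions(probs, adjust_fns):
    """One-vs-all aggregation: adjust_fns[k] stands in for the per-class
    causal model (form unspecified in the abstract) that rescales the
    probability of class k; the final label is the argmax of the adjusted
    scores."""
    adjusted = np.column_stack(
        [fn(probs[:, k]) for k, fn in enumerate(adjust_fns)]
    )
    return adjusted.argmax(axis=1)
```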
Abstract: Fairness matters in both machine learning (ML) predictions and human decisions: ML models are prone to algorithmic and data bias, while human decisions are affected by subjectivity and cognitive bias. This study investigates fairness using a real-world university admission dataset of 870 profiles and three ML models, namely XGB, Bi-LSTM, and KNN, with textual features encoded as BERT embeddings. For individual fairness, we use a consistency score to assess decision consistency among experts with varied backgrounds and among the ML models; the ML models outperform human experts in fairness by 14.08% to 18.79%. For group fairness, we propose a gender-debiasing pipeline and demonstrate its efficacy in removing gender-specific language without compromising prediction performance: after debiasing, all models maintain or improve their classification accuracy, supporting the hypothesis that fairness and performance can coexist. Our findings highlight ML's potential to enhance fairness in admissions while maintaining high accuracy, and advocate a hybrid approach combining human judgement and ML models.
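A hedged sketch of two pieces described above, assuming PyTorch and Hugging Face Transformers: stripping gender-specific language before BERT encoding, and a nearest-neighbour consistency score. The gendered-term list, the bert-base-uncased checkpoint, and the Zemel-style consistency formula are all assumptions for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch under the assumptions stated in the lead-in.
import re
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.neighbors import NearestNeighbors

# Illustrative gendered-term list; the paper's word list is not given here.
GENDERED = re.compile(
    r"\b(he|she|him|her|his|hers|mr|mrs|ms|male|female|man|woman)\b",
    flags=re.IGNORECASE,
)

def debias_text(text: str) -> str:
    """Replace gender-specific tokens with a neutral placeholder."""
    return GENDERED.sub("[PERSON]", text)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

@torch.no_grad()
def embed(texts):
    """Encode debiased texts and return the [CLS] embedding per profile."""
    batch = tokenizer(
        [debias_text(t) for t in texts],
        padding=True, truncation=True, return_tensors="pt",
    )
    return bert(**batch).last_hidden_state[:, 0].numpy()

def consistency_score(embeddings, decisions, k=5):
    """One common consistency score (Zemel et al., 2013): 1 minus the mean
    decision disagreement among each profile's k nearest neighbours.
    `decisions` is a 0/1 array; the abstract does not give its exact formula."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)
    neighbor_decisions = decisions[idx[:, 1:]]  # drop the self-match
    return 1.0 - np.abs(decisions[:, None] - neighbor_decisions).mean()
```

The same score can be computed for each human expert and each model over identical profiles, which is one plausible way to compare their individual-fairness levels as the abstract describes.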