Abstract: Bias detection and mitigation is an active area of research in machine learning. This work extends the authors' previous research to provide a more rigorous and complete analysis of the bias found in AI predictive models. Six years of admissions data from a large urban research university were used to create AI models that predict whether a given student would be directly admitted into the School of Science under various scenarios. During this period, submission of standardized test scores as part of an application became optional, raising questions about the impact of test scores on admission decisions. We developed and analyzed AI models to understand which variables are important in admissions decisions and how the decision to exclude test scores affects the demographics of the admitted students. We then evaluated the predictive models to detect and analyze the biases these models may carry with respect to three variables chosen to represent sensitive populations: gender, race, and whether a student was the first in their family to attend college. We also extended our analysis to show that the detected biases were persistent. Finally, we included several fairness metrics in our analysis and discussed their uses and limitations.