Fairness, the impartial treatment of individuals or groups regardless of their inherent or acquired characteristics [20], is a critical challenge for the successful deployment of Artificial Intelligence (AI) in domains such as finance, human capital, and housing. A major obstacle to developing fair AI models is the bias implicit in the data available to train them. Filtering or sampling the dataset before training can help mitigate model bias, but it can also reduce model performance, and the impact of the bias can remain opaque. In this paper, we propose a method for visualizing the biases inherent in a dataset and understanding the potential trade-offs between fairness and accuracy. Our method builds on quality-diversity optimization, in particular Covariance Matrix Adaptation MAP-Elites (CMA-ME), a variant of the Multi-dimensional Archive of Phenotypic Elites algorithm. Our method provides a visual representation of bias in models, allows users to identify models that satisfy a minimum fairness threshold, and determines the trade-off between fairness and accuracy.
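To make the quality-diversity idea concrete, the sketch below shows one minimal way a MAP-Elites-style archive can illuminate a fairness-accuracy trade-off. It uses plain MAP-Elites with Gaussian mutation rather than CMA-ME's covariance-adapted emitters, and everything in it is an illustrative assumption rather than the paper's implementation: the synthetic dataset, the linear models parameterized by a weight vector `w`, accuracy as the quality objective, and demographic-parity difference as the behavior descriptor that indexes archive cells.

```python
# A minimal, illustrative MAP-Elites sketch (not the paper's implementation).
# Quality = accuracy on synthetic data; behavior descriptor = demographic-parity
# difference between two groups. Each archive cell keeps its best (elite) model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic dataset: 2 features, a binary label, a binary group.
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

def evaluate(w):
    """Return (accuracy, demographic-parity difference) for linear model w."""
    pred = (X @ w[:2] + w[2] > 0).astype(int)
    accuracy = (pred == y).mean()
    dp_diff = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, dp_diff

n_bins = 20                      # discretize fairness into 20 archive cells
archive = {}                     # cell index -> (accuracy, weights)

for _ in range(5000):
    if archive and rng.random() < 0.9:
        # Variation step: mutate a randomly chosen elite.
        parent = archive[int(rng.choice(list(archive)))][1]
        w = parent + rng.normal(scale=0.1, size=3)
    else:
        w = rng.normal(size=3)   # random bootstrap solution
    acc, dp = evaluate(w)
    cell = min(int(dp * n_bins), n_bins - 1)
    # Keep the most accurate model found so far in each fairness cell.
    if cell not in archive or acc > archive[cell][0]:
        archive[cell] = (acc, w)

# The filled archive is itself the visualization: best achievable accuracy
# at each fairness level, from which a user can pick the elite model that
# falls within a chosen fairness budget.
for cell in sorted(archive):
    print(f"dp_diff < {(cell + 1) / n_bins:.2f}: accuracy = {archive[cell][0]:.3f}")
```

Printing the archive row by row (or plotting it as a heatmap) exposes the trade-off directly: cells near zero demographic-parity difference hold the fairest elites, and comparing their accuracies to less constrained cells shows how much performance a given fairness threshold costs.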