We propose a novel approach to the often-competing goals of fairness and utility in machine learning classification: fairness requires that a model's predictions do not discriminate against any particular group, while utility demands that its predictions be as accurate as possible. Our aim is to investigate the relationship between predictive uncertainty and fairness. To this end, we employ Bayesian learning to estimate the uncertainty of each sample's prediction, where the estimate is constructed to be independent of confounding effects related to the protected attribute. We show empirically that samples with low classification uncertainty are modeled more accurately and more fairly than those with high uncertainty, whose representations may be biased and whose predictions carry higher error. To address the challenge of balancing fairness and utility, we propose a novel fairness-utility objective defined through uncertainty quantification: the weights in this objective are determined by the level of uncertainty, allowing both fairness and utility to be optimized simultaneously. Experiments on real-world datasets demonstrate the effectiveness of our approach: our method outperforms state-of-the-art methods on the fairness-utility trade-off, under both group and individual fairness metrics. This work presents a fresh perspective on the trade-off between fairness and utility in machine learning and highlights the potential of uncertainty as a means to achieve both.
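The abstract does not give the exact form of the uncertainty-weighted objective; as a minimal sketch of what such an objective could look like (the symbols and the weighting scheme below are our assumptions, not the authors' formulation), one might write

\[
\mathcal{L}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}\Big[\big(1 - w(u_i)\big)\,\ell_{\mathrm{util}}\big(f_\theta(x_i),\, y_i\big) \;+\; w(u_i)\,\ell_{\mathrm{fair}}\big(f_\theta(x_i),\, a_i\big)\Big],
\]

where $u_i$ denotes the Bayesian predictive uncertainty of sample $i$ (e.g., the variance of the posterior predictive distribution), $a_i$ its protected attribute, $\ell_{\mathrm{util}}$ and $\ell_{\mathrm{fair}}$ hypothetical per-sample utility and fairness losses, and $w:\mathbb{R}_{\ge 0}\to[0,1]$ a monotone map from uncertainty to weight. The intuition matching the abstract is that low-uncertainty samples, which are modeled accurately and fairly, are driven primarily by the utility term, while high-uncertainty samples receive more weight on the fairness term.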