Ranking algorithms, as an essential component of retrieval systems, have been continually improved in previous studies, especially with respect to relevance-based utility. In recent years, a growing body of research has addressed fairness in rankings, driven by increasing concerns about potential discrimination and echo-chamber effects. These efforts include traditional score-based methods, which allocate exposure to different groups using pre-defined scoring functions or selection strategies, and learning-based methods, which learn the scoring functions from data. Learning-based models are more flexible and achieve better performance than traditional methods. However, most learning-based models were trained and tested on outdated datasets where fairness labels are barely available. State-of-the-art models use relevance-based utility scores as a substitute for the fairness labels when training their fairness-aware loss, but plugging in this substitute does not guarantee that the loss attains its minimum. This inconsistency undermines model accuracy and performance, especially when training relies on gradient descent. Hence, we propose a distribution-based fair learning framework (DLF) that eliminates the need for fairness labels by replacing them with target fairness exposure distributions. Experiments on the TREC fair ranking track dataset confirm that our proposed framework achieves better fairness performance and maintains better control over the fairness-relevance trade-off than state-of-the-art fair ranking frameworks.
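
To make the core idea concrete, the following is a minimal sketch of a distribution-based fair loss under stated assumptions: group exposure is modeled with a differentiable softmax over predicted scores and matched to a target exposure distribution via KL divergence. The function name `dlf_loss`, the softmax exposure model, and the target-distribution encoding are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: a distribution-based fair loss that needs no
# per-item fairness labels, just a target exposure distribution over groups.
import torch
import torch.nn.functional as F

def dlf_loss(scores: torch.Tensor, group_ids: torch.Tensor,
             target_dist: torch.Tensor) -> torch.Tensor:
    """KL divergence between the group exposure implied by `scores`
    and a target fairness exposure distribution.

    scores:      (n_items,) predicted relevance scores for one query
    group_ids:   (n_items,) integer group id per item, in [0, n_groups)
    target_dist: (n_groups,) target exposure share per group, sums to 1
    """
    # Model per-item exposure as a softmax over scores: a smooth,
    # differentiable stand-in for position-based exposure.
    exposure = F.softmax(scores, dim=0)
    # Aggregate item-level exposure into group-level exposure.
    n_groups = target_dist.shape[0]
    group_exposure = torch.zeros(n_groups).scatter_add_(0, group_ids, exposure)
    # KL(target || predicted): zero iff group exposure matches the target.
    return F.kl_div(torch.log(group_exposure + 1e-12), target_dist,
                    reduction="sum")

# Usage: three items, two groups, uniform target exposure.
scores = torch.tensor([2.0, 0.5, 1.0], requires_grad=True)
group_ids = torch.tensor([0, 0, 1])
loss = dlf_loss(scores, group_ids, torch.tensor([0.5, 0.5]))
loss.backward()  # gradients push group exposure toward the target
```

Because the target distribution plays the role of the missing fairness label, minimizing this loss drives it to its true minimum (zero KL), avoiding the inconsistency that arises when relevance-based utility scores are substituted for labels.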