In this work, we use real-world data to evaluate and validate a machine learning (ML)-based algorithm for physical layer functionalities. Specifically, we apply a recently introduced Gaussian mixture model (GMM)-based algorithm to estimate uplink channels stemming from a measurement campaign. The estimator has an initial (offline) training phase, in which a GMM is fitted to given channel (training) data; thereafter, the fitted GMM is used for (online) channel estimation. Our experiments suggest that the GMM estimator learns the intrinsic characteristics of a given base station's whole radio propagation environment. Essentially, this ambient information is captured due to the universal approximation property of the initially fitted GMM: for a sufficiently large number of GMM components, the GMM estimator has been shown to approximate the (unknown) mean squared error (MSE)-optimal channel estimator arbitrarily well. In our experiments, the GMM estimator achieves significant performance gains over approaches that cannot capture this ambient information. To validate the claim that ambient information is learnt, we generate synthetic channel data with a state-of-the-art channel simulator, train the GMM estimator once on the synthetic and once on the real data, and apply each trained estimator to both the synthetic and the real channels. We then observe how providing suitable ambient information in the training phase beneficially impacts the subsequent channel estimation performance.
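The two-phase procedure described above (offline GMM fitting, online estimation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses synthetic real-valued stand-in data (real measured channels are complex-valued and would typically be handled, e.g., by stacking real and imaginary parts), and all dimensions, component counts, and noise levels are assumptions. The online step forms the standard GMM-based estimate as a posterior-weighted combination of per-component LMMSE filters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Illustrative stand-in for channel data: real-valued vectors drawn from
# two correlated clusters (dimensions and counts are assumptions).
dim = 8
A = rng.standard_normal((dim, dim)) / np.sqrt(dim)

def draw_channels(n):
    half = n // 2
    return np.vstack([
        rng.standard_normal((half, dim)) @ A.T + 2.0,
        rng.standard_normal((n - half, dim)) @ A.T - 2.0,
    ])

# --- Offline phase: fit a GMM to channel training data ---
h_train = draw_channels(2000)
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(h_train)

# --- Online phase: estimate h from noisy observations y = h + n ---
def gmm_estimate(y, gmm, s2):
    """Posterior-weighted sum of per-component LMMSE filters:
    h_hat = sum_k p(k | y) * (mu_k + C_k (C_k + s2 I)^{-1} (y - mu_k))."""
    n, d = y.shape
    log_resp, comp_est = [], []
    for k in range(gmm.n_components):
        C = gmm.covariances_[k]
        Cy = C + s2 * np.eye(d)              # covariance of y under component k
        diff = y - gmm.means_[k]
        sol = np.linalg.solve(Cy, diff.T).T  # rows: Cy^{-1} (y_i - mu_k)
        _, logdet = np.linalg.slogdet(Cy)
        # log p(k) * N(y; mu_k, Cy), up to a constant shared by all components
        log_resp.append(np.log(gmm.weights_[k]) - 0.5 *
                        (logdet + np.einsum("ij,ij->i", diff, sol)))
        comp_est.append(gmm.means_[k] + sol @ C)  # C is symmetric
    log_resp = np.stack(log_resp)            # shape (K, n)
    w = np.exp(log_resp - log_resp.max(axis=0))
    w /= w.sum(axis=0)                       # responsibilities p(k | y_i)
    return np.einsum("kn,knd->nd", w, np.stack(comp_est))

h_test = draw_channels(400)
s2 = 0.5                                     # assumed noise variance
y = h_test + np.sqrt(s2) * rng.standard_normal(h_test.shape)
h_hat = gmm_estimate(y, gmm, s2)

mse_raw = np.mean((y - h_test) ** 2)         # using y directly as the estimate
mse_gmm = np.mean((h_hat - h_test) ** 2)     # GMM-based estimate
```

On this toy data the GMM estimate attains a lower MSE than using the noisy observation directly, since each component's LMMSE filter exploits the learned structure of the fitted mixture.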