Self-supervised learning (SSL) has attracted growing interest in the speech processing community because it produces representations that are useful for many downstream tasks. SSL exploits global and contextual objectives to produce robust representations that can even outperform those of supervised models. Most self-supervised approaches, however, are limited to embedding information such as phonemes, speaker identity, and emotion into the extracted representations, which become invariant to background sounds as a consequence of contrastive and auto-regressive learning. This is limiting because many downstream tasks rely on noise information to perform accurately. Therefore, we propose a pre-training framework that learns information pertaining to background noise in a supervised manner, while jointly embedding speech information using a self-supervised strategy. We experiment with multiple encoders and show that our framework is useful for perceptual speech quality estimation, a task that relies on background cues. Our results show that the proposed approach improves performance with fewer parameters compared to multiple baselines.