Universal Approximation Theorems establish the density of various classes of neural network function approximators in $C(K, \mathbb{R}^m)$, where $K \subset \mathbb{R}^n$ is compact. In this paper, we extend the reach of these guarantees by establishing conditions under which learning tasks are continuous, and hence fall within their scope. We consider learning tasks given by conditional expectations $x \mapsto \mathrm{E}\left[Y \mid X = x\right]$, where the learning target $Y = f \circ L$ is a potentially pathological transformation of some underlying data-generating process $L$. Under a factorization $L = T \circ W$ of the data-generating process, where $T$ is a deterministic map acting on a random input $W$, we establish conditions, often easily verified using knowledge of $T$ alone, that guarantee the continuity of practically \textit{any} derived learning task $x \mapsto \mathrm{E}\left[f \circ L \mid X = x\right]$. We demonstrate the realism of our conditions with the example of randomized stable matching, thereby providing theoretical justification for the continuity of real-world learning tasks.
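To make the factorization $L = T \circ W$ concrete, the following is a minimal, self-contained sketch in the spirit of the stable-matching example; it is not drawn from the paper, and the helper names \texttt{gale\_shapley}, \texttt{sample\_W}, and \texttt{learning\_task}, the choice of $f$, and the way the law of $W$ depends on the feature $x$ are all hypothetical illustrations. Here $T$ is Gale--Shapley deferred acceptance, $W$ is a random preference profile whose distribution depends on a feature $x$, and the learning task $x \mapsto \mathrm{E}\left[f \circ L \mid X = x\right]$ is estimated by Monte Carlo, with $f$ the indicator that proposer $0$ is matched to acceptor $0$.

\begin{verbatim}
import random

def gale_shapley(prop_prefs, acc_prefs):
    """Deterministic map T: preference profiles -> stable matching.
    prop_prefs[i] ranks acceptors for proposer i (best first);
    acc_prefs[j] ranks proposers for acceptor j (best first)."""
    n = len(prop_prefs)
    # rank[j][i] = position of proposer i in acceptor j's list (lower is better)
    rank = [{p: r for r, p in enumerate(acc_prefs[j])} for j in range(n)]
    next_choice = [0] * n          # next acceptor each proposer will try
    match = {}                     # acceptor -> current proposer
    free = list(range(n))
    while free:
        i = free.pop()
        j = prop_prefs[i][next_choice[i]]
        next_choice[i] += 1
        if j not in match:
            match[j] = i
        elif rank[j][i] < rank[j][match[j]]:
            free.append(match[j])  # acceptor j trades up; old proposer is free
            match[j] = i
        else:
            free.append(i)         # proposal rejected
    return {i: j for j, i in match.items()}  # proposer -> acceptor

def sample_W(x, n, rng):
    """Random input W whose law depends on the feature x: with
    probability x, proposer 0 ranks acceptor 0 first (hypothetical choice)."""
    prop = [rng.sample(range(n), n) for _ in range(n)]
    if rng.random() < x:
        prop[0].remove(0)
        prop[0].insert(0, 0)
    acc = [rng.sample(range(n), n) for _ in range(n)]
    return prop, acc

def learning_task(x, n=4, trials=2000, seed=0):
    """Monte Carlo estimate of E[f(T(W)) | X = x], with f the
    indicator that proposer 0 is matched to acceptor 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        prop, acc = sample_W(x, n, rng)
        matching = gale_shapley(prop, acc)   # L = T(W)
        hits += (matching[0] == 0)           # f(L)
    return hits / trials

for x in (0.0, 0.5, 1.0):
    print(x, learning_task(x))
\end{verbatim}

Although $T$ itself is a discrete, highly discontinuous map of the preference profile, in this toy construction the derived task is affine in $x$ (condition on the biased coin in \texttt{sample\_W}), and hence continuous, illustrating the kind of regularity that the paper's conditions are meant to guarantee for general $f$.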