If $p \in (1, \infty)$ and the activation function is a monotone sigmoid, ReLU, ELU, softplus, or leaky ReLU, we prove that neural networks are universal approximators of $L^{p}(\mathbb{R} \times [0, 1]^n)$. This generalizes the corresponding universal approximation theorems on $[0,1]^n$. Moreover, if $p \in (1, \infty)$ and the activation function is a sigmoid, ReLU, ELU, softplus, or leaky ReLU, we show that neural networks never represent non-zero functions in $L^{p}(\mathbb{R} \times \mathbb{R}^+)$ or $L^{p}(\mathbb{R}^2)$.