Measure-preserving neural networks are well-developed invertible models; however, their approximation capabilities remain unexplored. This paper rigorously establishes general sufficient conditions for approximating measure-preserving maps by measure-preserving neural networks. It is shown that for a compact set $U \subset \R^D$ with $D\geq 2$, every measure-preserving map $\psi: U\to \R^D$ that is injective and bounded can be approximated in the $L^p$-norm by measure-preserving neural networks. In particular, differentiable maps whose Jacobian determinants equal $\pm 1$ are measure-preserving, injective, and bounded on $U$, and therefore possess this approximation property.
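
For concreteness, the approximation claim can be read as the following hedged restatement, using the $L^p$-norm over $U$ mentioned above; the network notation $\phi_\theta$ and the range $1 \le p < \infty$ are illustrative assumptions rather than notation taken from the paper:
\[
\forall\, \varepsilon > 0,\ \exists\ \text{a measure-preserving neural network } \phi_\theta \ \text{such that} \quad
\|\psi - \phi_\theta\|_{L^p(U)} = \left( \int_U \|\psi(x) - \phi_\theta(x)\|^p \, dx \right)^{1/p} < \varepsilon .
\]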