We introduce WPPNets, which are CNNs trained with a new unsupervised loss function for image superresolution of materials microstructures. Instead of requiring access to a large database of registered high- and low-resolution images, we assume only a large database of low-resolution images, the forward operator, and a single high-resolution reference image. We then propose a loss function based on the Wasserstein patch prior, which measures the Wasserstein-2 distance between the patch distributions of the predictions and of the reference image. We demonstrate by numerical examples that WPPNets outperform other methods with similar assumptions. In particular, we show that WPPNets are much more stable under inaccurate knowledge or perturbations of the forward operator. This enables their use in real-world applications, where neither a large database of registered data nor the exact forward operator is given.
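To make the central quantity concrete, the following is a minimal sketch of how the Wasserstein-2 distance between the patch distributions of a prediction and a reference image can be estimated empirically: random patches are sampled from both images and an exact optimal assignment between the two equally sized patch samples is computed. All function names and parameters here (extract_patches, wasserstein2_patch_loss, patch size, sample count) are illustrative assumptions, not the paper's implementation, and the sketch omits the data-fidelity term involving the forward operator that a full training loss would include.

import numpy as np
from scipy.optimize import linear_sum_assignment

def extract_patches(img, size, n, rng):
    """Sample n random square patches of side `size` from a 2D image,
    returned as flattened vectors of shape (n, size*size)."""
    H, W = img.shape
    ys = rng.integers(0, H - size + 1, n)
    xs = rng.integers(0, W - size + 1, n)
    return np.stack([img[y:y+size, x:x+size].ravel() for y, x in zip(ys, xs)])

def wasserstein2_patch_loss(pred, ref, size=6, n=256, seed=0):
    """Squared Wasserstein-2 distance between the empirical patch
    distributions of a prediction and a reference image, via an exact
    assignment between two uniformly weighted patch samples."""
    rng = np.random.default_rng(seed)
    P = extract_patches(pred, size, n, rng)   # patches of the prediction
    Q = extract_patches(ref, size, n, rng)    # patches of the reference
    # Pairwise squared Euclidean costs between flattened patches.
    cost = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)    # exact discrete OT plan
    return cost[row, col].mean()              # empirical W2^2 estimate

# Usage with random stand-ins for a prediction and a reference image.
rng = np.random.default_rng(1)
pred = rng.random((128, 128))
ref = rng.random((128, 128))
print(wasserstein2_patch_loss(pred, ref))

In a training setting, a differentiable surrogate of this quantity would be evaluated on network outputs and combined with a data-fidelity term through the forward operator, so that only low-resolution images and the single reference are needed.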