Dyna-style reinforcement learning is a powerful approach for problems where real data is scarce. The main idea is to supplement real trajectories, i.e. sequences of sampled states over time, with simulated trajectories sampled from a learned model of the environment. However, in large state spaces, learning a good generative model of the environment remains an open problem. We propose to use deep belief networks to learn an environment model for use in Dyna. We present our approach and validate it empirically on problems where the state observations consist of images. Our results demonstrate that using deep belief networks, which are full generative models, significantly outperforms the use of the linear expectation models proposed by Sutton et al. (2008).
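To make the Dyna loop described above concrete, here is a minimal tabular sketch of the underlying idea: each real transition drives both a direct value update and an update to a learned model, and extra "planning" updates are then drawn from that model. This is an illustrative Dyna-Q-style sketch, not the paper's method; the `env` interface (`reset`, `step`, `actions`) is a hypothetical stand-in, and the memorized transition table plays the role that a deep belief network over image observations plays in the paper.

```python
import random
from collections import defaultdict

def dyna_q(env, n_episodes=50, n_planning=10, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q sketch: interleave updates from real transitions
    with simulated transitions drawn from a learned environment model
    (here a simple memorized table; the paper learns a generative model)."""
    Q = defaultdict(float)   # Q[(state, action)] -> action value
    model = {}               # model[(s, a)] -> (reward, next_state, done)

    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection in the real environment.
            if random.random() < eps:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda b: Q[(s, b)])
            s2, r, done = env.step(a)

            # (1) Direct RL update from the real transition.
            target = r if done else r + gamma * max(Q[(s2, b)] for b in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])

            # (2) Update the environment model with the observed transition.
            model[(s, a)] = (r, s2, done)

            # (3) Planning: replay simulated transitions sampled from the model.
            for _ in range(n_planning):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                ptarget = pr if pdone else pr + gamma * max(Q[(ps2, b)] for b in env.actions)
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q
```

In this sketch the model can only replay transitions it has already seen; a full generative model such as a deep belief network can instead sample novel plausible transitions, which is what makes Dyna useful in large (e.g. image-based) state spaces.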