Normals with unknown variance (NUV) can represent many useful priors, including $L_p$ norms and other sparsifying priors, and they blend well with linear-Gaussian models and Gaussian message passing algorithms. In this paper, we elaborate on recently proposed discretizing NUV priors, and we propose new NUV representations of half-space constraints and box constraints. We then demonstrate the use of such NUV representations with exemplary applications in model predictive control, with a variety of constraints on the input, the output, or the internal state of the controlled system. In such applications, the computations boil down to iterations of Kalman-type forward-backward recursions, with a complexity per iteration that is linear in the planning horizon. Consequently, this approach can handle long planning horizons, which distinguishes it from the prior art. For nonconvex constraints, this approach has no claim to optimality, but it is empirically very effective.