Neural networks provide state-of-the-art performance on a variety of tasks. However, they are often overconfident when making predictions. This inability to properly account for uncertainty limits their application to high-risk decision making, active learning and Bayesian optimisation. To address this, Bayesian inference has been proposed as a framework for improving uncertainty estimates. In practice, Bayesian neural networks rely on poorly understood approximations for computational tractability. We prove that two commonly used approximation methods, the factorised Gaussian assumption and Monte Carlo dropout, lead to pathological estimates of the predictive uncertainty in single hidden layer ReLU networks. This indicates that more flexible approximations are needed to obtain reliable uncertainty estimates.
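To make the setting concrete, the sketch below illustrates the second of the two approximations discussed here: Monte Carlo dropout predictive uncertainty in a single hidden layer ReLU network. The layer widths, dropout rate, and helper names are illustrative assumptions, not taken from the paper; the sketch only shows the standard recipe of keeping dropout active at prediction time and averaging over sampled forward passes.

```python
import torch
import torch.nn as nn


class OneHiddenLayerReLU(nn.Module):
    """Single hidden layer ReLU network with dropout (sizes are illustrative)."""

    def __init__(self, in_dim=1, hidden=50, p_drop=0.1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.drop = nn.Dropout(p=p_drop)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, x):
        # Dropout applied to the hidden ReLU activations.
        return self.fc2(self.drop(torch.relu(self.fc1(x))))


def mc_dropout_predict(model, x, n_samples=100):
    """Estimate the predictive mean and variance by sampling dropout masks."""
    model.train()  # keep dropout stochastic at prediction time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)


# Hypothetical usage: predictive uncertainty over a 1D input grid.
x_grid = torch.linspace(-3.0, 3.0, 200).unsqueeze(-1)
model = OneHiddenLayerReLU()
mean, var = mc_dropout_predict(model, x_grid)
```

The variance returned by `mc_dropout_predict` is the kind of predictive uncertainty estimate whose pathologies (for this architecture, and for the factorised Gaussian approximation) are the subject of the results summarised above.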