Machine learning operates at the intersection of statistics and computer science. This raises the question of its underlying methodology. While much emphasis has been placed on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received little attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. Machine learning algorithms are commonly thought to infer general prediction rules from past observations, akin to a statistical procedure by which estimates are obtained from a sample of data. But machine learning algorithms can equally be described as selecting one prediction rule from an entire class of candidate functions. In particular, the algorithm that determines the weights of an artificial neural network operates by empirical risk minimization, rejecting prediction rules that lack empirical adequacy. It also exhibits implicit regularization, which pushes hypothesis choice toward simpler prediction rules. We argue that these two aspects taken together give rise to a falsificationist account of artificial neural networks.
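In a standard formulation (notation introduced here for illustration, not drawn from the paper itself), empirical risk minimization selects, given a sample $(x_1, y_1), \ldots, (x_n, y_n)$ and a hypothesis class $\mathcal{F}$, the prediction rule
\[
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} L\bigl(f(x_i), y_i\bigr),
\]
where $L$ is a loss function measuring the discrepancy between prediction and observation. On the falsificationist reading sketched above, rules $f$ with high empirical risk are the ones rejected as empirically inadequate.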