In this paper, we analyze the effects of random sampling on adaptive diffusion networks. These networks consist of a collection of nodes that can measure and process data, and that communicate with one another to pursue the common goal of estimating an unknown system. In particular, our theoretical analysis considers the diffusion least-mean-squares (LMS) algorithm in a scenario in which the nodes are randomly sampled, so that each node may or may not adapt its local estimate at any given iteration. Our model shows that, if the nodes cooperate, reducing the sampling probability leads to a slight decrease in the steady-state Network Mean-Square Deviation (NMSD), assuming a stationary environment and that all other parameters of the algorithm are kept fixed. Furthermore, under certain circumstances, this can also ensure the stability of the algorithm in situations in which it would otherwise be unstable. Although counterintuitive, these findings are supported by simulation results, which closely match the theoretical curves.
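To make the sampled scenario concrete, the following minimal sketch simulates diffusion LMS with random node sampling; it assumes the adapt-then-combine (ATC) form of the algorithm, a Bernoulli sampling model with probability p, and an illustrative ring-topology combination matrix, none of which are necessarily the exact setup of the paper.

```python
# Sketch of ATC diffusion LMS with random node sampling (illustrative assumptions:
# Bernoulli sampling, ring topology, white Gaussian regressors and noise).
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 4                      # number of nodes, filter length
w_true = rng.standard_normal(M)   # unknown system to be estimated
mu, p = 0.05, 0.7                 # step size and sampling probability

# Combination matrix: column k holds the weights node k assigns to its neighbors
A = np.zeros((N, N))
for k in range(N):
    neighbors = [(k - 1) % N, k, (k + 1) % N]
    A[neighbors, k] = 1.0 / len(neighbors)

W = np.zeros((N, M))              # local estimates, one row per node

for _ in range(5000):
    psi = W.copy()
    for k in range(N):
        if rng.random() < p:      # node k is sampled: perform the LMS adaptation step
            u_k = rng.standard_normal(M)                        # regressor
            d_k = u_k @ w_true + 0.01 * rng.standard_normal()   # noisy measurement
            e_k = d_k - u_k @ W[k]
            psi[k] = W[k] + mu * e_k * u_k
        # otherwise node k skips adaptation and keeps its previous estimate
    W = A.T @ psi                  # combine step: average neighbors' intermediate estimates

# Steady-state Network Mean-Square Deviation (NMSD) estimate for this single run
nmsd = np.mean(np.sum((W - w_true) ** 2, axis=1))
print(f"NMSD: {nmsd:.3e}")
```

Rerunning this sketch with different values of p (and averaging over many independent runs) is one way to observe the trend discussed above, namely how the steady-state NMSD varies as the sampling probability is reduced while all other parameters are held fixed.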