Many learning tasks in physics and chemistry involve global spatial symmetries as well as permutational symmetry among particles. The standard approach to such problems is to use equivariant neural networks, which employ tensor products between tensors that transform under the spatial symmetry group. However, as the number of distinct tensors and the complexity of the relationships between them grow, the bookkeeping required to ensure parsimony as well as equivariance quickly becomes nontrivial. In this paper, we propose to use fusion diagrams, a technique widely used in simulating SU($2$)-symmetric quantum many-body problems, to design new equivariant components for equivariant neural networks. This yields a diagrammatic approach to constructing novel neural network architectures. We show that, when applied to particles in a given local neighborhood, the resulting components, which we call fusion blocks, are universal approximators of continuous equivariant functions defined on that neighborhood. As a practical demonstration, we incorporate a fusion block into a pre-existing equivariant architecture (Cormorant) and show that it improves performance on benchmark molecular learning tasks.
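For context, a minimal sketch (with notation assumed here rather than drawn from the paper) of the standard equivariant tensor-product operation on which such networks, and the fusion diagrams that organize them, are built: given features $u^{(\ell_1)}$ and $v^{(\ell_2)}$ transforming as spherical tensors of ranks $\ell_1$ and $\ell_2$, their Clebsch-Gordan product
$$\bigl(u^{(\ell_1)} \otimes_{\mathrm{cg}} v^{(\ell_2)}\bigr)^{(\ell)}_{m} \;=\; \sum_{m_1=-\ell_1}^{\ell_1} \sum_{m_2=-\ell_2}^{\ell_2} C^{\ell m}_{\ell_1 m_1,\, \ell_2 m_2}\, u^{(\ell_1)}_{m_1} v^{(\ell_2)}_{m_2}, \qquad |\ell_1-\ell_2| \le \ell \le \ell_1+\ell_2,$$
again transforms as a spherical tensor of rank $\ell$, where $C^{\ell m}_{\ell_1 m_1,\, \ell_2 m_2}$ are the Clebsch-Gordan coefficients of SU($2$), nonzero only when $m = m_1 + m_2$. Equivariance is preserved at every step, and a fusion diagram can be read as a prescription for which of these pairwise products to take and in what order; iterating this operation over many tensors is exactly the bookkeeping that the diagrams organize.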