In many real-world settings, we are interested in learning invariant and equivariant functions over nested or multiresolution structures, such as a set of sequences, a graph of graphs, or a multiresolution image. While equivariant linear maps, and by extension multilayer perceptrons (MLPs), are known for many of the individual basic structures, a formalism for dealing with a hierarchy of symmetry transformations is lacking. Observing that the transformation group for a nested structure corresponds to the ``wreath product'' of the symmetry groups of the building blocks, we show how to obtain the equivariant map for hierarchical data structures using an intuitive combination of the equivariant maps for the individual blocks. To demonstrate the effectiveness of this type of model, we use a hierarchy of translation and permutation symmetries for learning on point cloud data, and report state-of-the-art results on \kw{semantic3d} and \kw{s3dis}, two of the largest real-world benchmarks for 3D semantic segmentation.
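To make the idea of combining per-block equivariant maps concrete, the following is a minimal NumPy sketch (not the paper's implementation) of a linear layer on a ``set of sets'' that is equivariant to the wreath product of the inner and outer permutation groups. The function name \texttt{nested\_equivariant\_layer} and the scalar weights are illustrative assumptions; the construction simply combines the identity term with pooling at each level of the hierarchy, in the spirit of DeepSets-style layers.

```python
import numpy as np

def nested_equivariant_layer(x, w_id, w_inner, w_outer):
    # x: (n_outer, n_inner, d) -- a set of sets of feature vectors.
    # Equivariance to the wreath product S_inner wr S_outer is obtained
    # by combining the identity map with pooling at each hierarchy level:
    inner_pool = x.sum(axis=1, keepdims=True)        # pool within each inner set
    outer_pool = x.sum(axis=(0, 1), keepdims=True)   # pool over the whole structure
    return w_id * x + w_inner * inner_pool + w_outer * outer_pool

# Equivariance check: permuting the outer set and, independently,
# the elements inside each inner set commutes with the layer.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 2))
y = nested_equivariant_layer(x, 0.5, 0.3, 0.2)

p_outer = rng.permutation(3)                          # permute outer set
inner_perms = [rng.permutation(4) for _ in range(3)]  # independent inner perms
x_perm = np.stack([x[p_outer][i][inner_perms[i]] for i in range(3)])
y_perm = nested_equivariant_layer(x_perm, 0.5, 0.3, 0.2)
y_expected = np.stack([y[p_outer][i][inner_perms[i]] for i in range(3)])
assert np.allclose(y_expected, y_perm)
```

Because each inner set may be permuted by a different permutation, the group acting here is the wreath product rather than a direct product, matching the abstract's observation about nested structures.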