3D Morphable Models (3DMMs) are powerful statistical tools for representing and modeling 3D faces. To build a 3DMM, a training set of fully registered face scans is required, and its modeling capabilities directly depend on the variability contained in the training data. Thus, accurately establishing a dense point-to-point correspondence across heterogeneous scans with sufficient diversity in terms of identities, ethnicities, or expressions becomes essential. In this manuscript, we present an approach that leverages a 3DMM to transfer its dense semantic annotation across a large set of heterogeneous 3D faces, establishing a dense correspondence between them. To this end, we propose a novel formulation for learning a set of sparse deformation components with local support on the face that, together with an original non-rigid deformation algorithm, allow us to precisely fit the 3DMM to arbitrary faces and transfer its semantic annotation. We evaluated our approach on three large and diverse datasets, showing that it can effectively generalize to very different samples and accurately establish a dense correspondence even in the presence of complex facial expressions or unseen deformations. As the main outcome of this work, we build a heterogeneous, large-scale 3DMM from more than 9,000 fully registered scans obtained by joining the three datasets together.
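To make the pipeline concrete, the following is a minimal sketch, not the authors' algorithm, of the two steps the abstract names: fitting a linear 3DMM built from locally supported deformation components to a raw scan, and transferring a per-vertex semantic annotation through the resulting correspondence. All names (`mean_shape`, `components`, `labels`, the L1 weight `lam`) are hypothetical placeholders, the fit assumes a prior rigid alignment, and a simple derivative-free optimizer stands in for the paper's non-rigid deformation algorithm.

```python
# Illustrative sketch only: fit sparse, locally supported deformation
# components to a target scan, then transfer the model's annotation.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_and_transfer(mean_shape, components, target, labels, lam=0.1):
    """mean_shape: (n, 3) average face vertices of the 3DMM
    components: (k, n, 3) locally supported deformation components
    target:     (m, 3) raw scan vertices, assumed rigidly pre-aligned
    labels:     (n,) dense semantic annotation on the model vertices
    lam:        L1 weight encouraging a sparse set of active components
    """
    n = mean_shape.shape[0]
    k = components.shape[0]
    C = components.reshape(k, -1)          # flatten to (k, 3n)
    scan_tree = cKDTree(target)

    def cost(w):
        # Deform the model with the current component coefficients.
        shape = mean_shape + (w @ C).reshape(n, 3)
        # Model-to-scan nearest-neighbor distances as the data term,
        # plus an L1 penalty so only a few local components activate.
        d, _ = scan_tree.query(shape)
        return np.sum(d**2) + lam * np.sum(np.abs(w))

    # Derivative-free optimization keeps the sketch self-contained.
    w = minimize(cost, np.zeros(k), method="Powell").x
    fitted = mean_shape + (w @ C).reshape(n, 3)

    # Transfer the annotation: each scan vertex inherits the label of
    # the closest vertex of the fitted model.
    _, idx = cKDTree(fitted).query(target)
    return fitted, labels[idx]
```

Sparsity with local support is what lets a few coefficients move, for example, only the mouth region to match an expression, rather than perturbing the whole face as global PCA components would.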