We show that several regularized loss minimization problems can use locally perturbed data with theoretical guarantees of generalization, i.e., loss consistency. Our results quantitatively connect the convergence rates of these learning problems to the impossibility, for any adversary, of recovering the original data from the perturbed observations. To this end, we introduce a new notion of data irrecoverability, and show that the well-studied notion of data privacy implies data irrecoverability.