As a popular tool for producing meaningful and interpretable models, large-scale sparse learning works efficiently when the underlying structures are exactly sparse or close to sparse. However, naively applying existing regularization methods can result in misleading outcomes due to model misspecification. In particular, the direct sparsity assumption on coefficient vectors has been questioned in real applications. We therefore consider nonsparse learning with a conditional sparsity structure, in which the coefficient vector becomes sparse after the effects of certain unobservable latent variables are taken out. A new methodology of nonsparse learning with latent variables (NSL) is proposed to simultaneously recover the significant observable predictors and latent factors as well as their effects. We explore a common latent family incorporating population principal components and derive convergence rates for both the sample principal components and their score vectors that hold for a wide class of distributions. With the latent variables properly estimated, properties including model selection consistency and oracle inequalities under various prediction and estimation losses are established for the proposed methodology. The new methodology and results are demonstrated through simulation and real-data examples.
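To make the conditional sparsity structure concrete, the following is a minimal sketch in standard linear-model notation; the symbols $y$, $X$, $F$, $\beta$, $\gamma$, and $\varepsilon$ are assumed here for illustration rather than taken from the text above:
\[
y = X\beta + F\gamma + \varepsilon,
\]
where $y$ is the response vector, $X$ the observable design matrix, $F$ the matrix of unobservable latent variables (for instance, score vectors of population principal components), and $\varepsilon$ a noise vector. The coefficient vector $\beta$ is not assumed sparse on its own; sparsity is imposed only conditionally, after the latent contribution $F\gamma$ has been taken out.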