Self-supervised learning has recently attracted great attention since it requires only unlabeled data for training. Contrastive learning is a popular self-supervised approach that performs well empirically, yet the theoretical understanding of its generalization to downstream tasks remains limited. To this end, we present a theoretical explanation of how contrastive self-supervised pre-trained models generalize to downstream tasks. Concretely, we show quantitatively that a self-supervised model generalizes to downstream classification tasks if it embeds the input data into a feature space with well-separated class centers and closely clustered intra-class samples. Building on this result, we further analyze SimCLR and Barlow Twins, two canonical contrastive self-supervised methods. We prove that either method yields a feature space with these properties, which explains their success in generalizing to downstream classification tasks. Finally, we conduct various experiments to verify our theoretical findings.
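The geometric condition above can be made concrete with a small numerical sketch (the data and the function `separation_stats` are hypothetical illustrations, not the paper's formal definitions): given labeled embeddings, compute each class center, the minimum distance between centers (inter-class separation), and the average distance of samples to their own center (intra-class spread); the favorable geometry is one where the former dominates the latter.

```python
import numpy as np

def separation_stats(features, labels):
    """Return (min inter-center distance, mean intra-class spread)."""
    classes = np.unique(labels)
    # Class centers: mean embedding per class.
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Minimum pairwise distance between distinct class centers.
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    min_center_dist = dists[~np.eye(len(classes), dtype=bool)].min()
    # Average distance of each sample to its own class center.
    spread = np.mean([
        np.linalg.norm(features[labels == c] - centers[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    return min_center_dist, spread

# Toy embeddings: two tight Gaussian clusters with well-separated centers.
rng = np.random.default_rng(0)
feats = np.concatenate([
    rng.normal(0.0, 0.1, size=(50, 8)),
    rng.normal(3.0, 0.1, size=(50, 8)),
])
labs = np.array([0] * 50 + [1] * 50)
sep, spread = separation_stats(feats, labs)
# A large ratio sep / spread indicates the geometry the analysis requires.
```

On this toy data the inter-center distance is far larger than the intra-class spread, so a simple classifier built on the class centers would separate the two classes; the abstract's claim is that embeddings learned by SimCLR or Barlow Twins provably exhibit this kind of geometry.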