Point clouds have attracted increasing attention as a natural representation of 3D shapes. Significant progress has been made in developing methods for point cloud analysis, which in practice often requires costly human annotation as supervision. To address this issue, we propose a novel self-contrastive learning framework for self-supervised point cloud representation learning, aiming to capture both local geometric patterns and nonlocal semantic primitives based on the nonlocal self-similarity of point clouds. The contributions are two-fold. On the one hand, instead of contrasting among different point clouds as is common in contrastive learning, we exploit self-similar point cloud patches within a single point cloud as positive samples and dissimilar patches as negative ones, which facilitates the task of contrastive learning. Such self-contrastive learning is well aligned with the emerging paradigm of self-supervised learning for point cloud analysis. On the other hand, we actively learn hard negative samples, i.e., negative samples close to positive ones in the representation space, for discriminative feature learning; these are sampled conditionally on each anchor patch by leveraging the degree of self-similarity. Experimental results show that the proposed method achieves state-of-the-art performance on widely used benchmark datasets for self-supervised point cloud segmentation and for transfer learning on classification.
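To make the two contributions concrete, the following is a minimal PyTorch sketch of an InfoNCE-style self-contrastive loss over patches of a single point cloud. All names (`self_contrastive_loss`, `z`, `s`, `tau`) and the specific hard-negative reweighting scheme are illustrative assumptions for exposition, not the authors' actual implementation; in particular, the source of the self-similarity scores `s` (e.g., a geometric patch descriptor) is left abstract.

```python
# Minimal sketch, assuming per-patch embeddings from a shared encoder and
# precomputed self-similarity scores between patches of ONE point cloud.
import torch
import torch.nn.functional as F

def self_contrastive_loss(z, s, tau=0.07):
    """InfoNCE-style self-contrastive loss over patches of a single cloud.

    z: (P, D) embeddings of P patches from one point cloud.
    s: (P, P) self-similarity scores between patches; s[i, j] high means
       patches i and j are considered self-similar.
    For each anchor i, the most self-similar patch is taken as the positive;
    the remaining patches act as negatives, reweighted by their proximity to
    the anchor in representation space so that hard negatives count more.
    (The reweighting below is one simple conditional scheme, assumed here.)
    """
    z = F.normalize(z, dim=1)                    # cosine-similarity space
    logits = z @ z.t() / tau                     # (P, P) pairwise logits
    P = z.size(0)
    eye = torch.eye(P, dtype=torch.bool, device=z.device)
    s = s.masked_fill(eye, float('-inf'))        # exclude self-pairs
    pos = s.argmax(dim=1)                        # positive index per anchor

    # Hard-negative weighting: emphasize negatives whose embeddings are
    # close to the anchor (detached so weights do not receive gradients).
    with torch.no_grad():
        w = torch.softmax(logits.masked_fill(eye, float('-inf')), dim=1)
        w = w * P                                # keep mean weight near 1
    exp_logits = (torch.exp(logits) * w).masked_fill(eye, 0.0)

    pos_logits = logits[torch.arange(P, device=z.device), pos]
    loss = -(pos_logits - torch.log(exp_logits.sum(dim=1)))
    return loss.mean()

# Usage with random stand-in data: P = 64 patches, D = 128 dims.
z = torch.randn(64, 128, requires_grad=True)
s = torch.rand(64, 64)
s = (s + s.t()) / 2                              # symmetric similarity scores
loss = self_contrastive_loss(z, s)
loss.backward()
```

Because both positives and negatives come from the same point cloud, the loss needs no cross-cloud batch construction, which is the sense in which the scheme is "self"-contrastive; the similarity-conditioned weighting is what biases the denominator toward hard negatives near each anchor.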