This paper proposes a probabilistic contrastive loss function for self-supervised learning. The well-known contrastive loss is deterministic and involves a temperature hyperparameter that scales the inner product between two normalized feature embeddings. By reinterpreting the temperature hyperparameter as a quantity related to the radius of the hypersphere, we derive a new loss function that involves a confidence measure quantifying uncertainty in a mathematically grounded manner. We empirically demonstrate several intriguing properties of the proposed loss function that agree with human-like predictions. We believe the present work brings a new perspective to the area of contrastive learning.
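For reference, the deterministic contrastive loss alluded to above is commonly written in its InfoNCE form; the sketch below assumes that standard formulation, and the symbols $\mathbf{z}_i$, $\mathbf{z}_j$, $\tau$, and $N$ are notation introduced here rather than taken from the paper:

\[
\mathcal{L}_{i,j} \;=\; -\log \frac{\exp\!\left(\mathbf{z}_i^{\top}\mathbf{z}_j \,/\, \tau\right)}{\sum_{k=1,\, k \neq i}^{N} \exp\!\left(\mathbf{z}_i^{\top}\mathbf{z}_k \,/\, \tau\right)},
\]

where $\mathbf{z}_i$ and $\mathbf{z}_j$ are $\ell_2$-normalized embeddings of a positive pair, the sum runs over the other $N-1$ samples in the batch, and $\tau > 0$ is the temperature hyperparameter that scales the inner products. Since normalized embeddings lie on the unit hypersphere, $\tau$ is the natural quantity to reinterpret geometrically, as the abstract suggests.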