Abstract: A major limitation in applying deep learning to artificial intelligence (AI) systems is the scarcity of high-quality curated datasets. We investigate strong-augmentation-based self-supervised learning (SSL) techniques to address this problem. Using breast cancer detection as an example, we first identify a mammogram-specific transformation paradigm and then systematically compare four recent SSL methods representing a diversity of approaches. We develop a method to convert a model pretrained on uniformly tiled patches into one that makes predictions on whole images, along with an attention-based pooling method that improves classification performance. We found that the best SSL model substantially outperformed the baseline supervised model, improved label efficiency nearly 4-fold, and was highly transferable from one dataset to another. SSL represents a major breakthrough in computer vision and may help the field of AI for medical imaging shift away from supervised learning and its dependence on scarce labels.
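To illustrate the kind of attention-based pooling referred to above, the sketch below aggregates patch-level embeddings into a single whole-image representation using learned attention weights. This is a minimal, generic attention-MIL-style example, not the paper's exact architecture; the encoder, embedding dimension, and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Aggregate patch embeddings into one whole-image embedding with learned attention weights."""

    def __init__(self, embed_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        # Two-layer scoring network mapping each patch embedding to a scalar attention score.
        self.score = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, num_patches, embed_dim)
        scores = self.score(patch_embeddings)               # (batch, num_patches, 1)
        weights = torch.softmax(scores, dim=1)               # normalize over patches
        return (weights * patch_embeddings).sum(dim=1)       # (batch, embed_dim)

# Usage sketch: pool patch features from a (hypothetical) pretrained patch encoder,
# then classify the whole image from the pooled representation.
pooling = AttentionPooling(embed_dim=512)
classifier = nn.Linear(512, 1)                               # e.g., malignant-vs-benign logit
patch_features = torch.randn(2, 49, 512)                     # dummy batch: 2 images, 49 tiled patches each
logits = classifier(pooling(patch_features))
```

Compared with average or max pooling over patches, the learned weighting lets the classifier emphasize the few patches that carry diagnostic signal, which is one plausible reason such pooling can improve whole-image classification over uniform tiling.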