Physics-guided deep learning (PG-DL) has emerged as a powerful tool for accelerated MRI reconstruction, but it often necessitates a database of fully-sampled measurements for training. Recent self-supervised and unsupervised learning approaches enable training without fully-sampled data. However, a database of undersampled measurements may still not be available in many scenarios, especially for scans involving contrast or recently developed sequences, necessitating new methodology for scan-specific PG-DL reconstruction. A main challenge in developing scan-specific PG-DL methods is the large number of network parameters, which makes such methods prone to over-fitting. Moreover, database-trained models may not generalize to unseen measurements that differ in SNR, image contrast or sampling pattern. In this work, we propose a zero-shot self-supervised learning approach for scan-specific PG-DL reconstruction that tackles these issues. The proposed approach splits the available measurements from a single scan into three disjoint sets. Two of these sets are used to enforce data consistency and to define the loss during training, while the third is used to establish an early stopping criterion. In the presence of models pre-trained on a database, we show that the proposed approach can be adapted as scan-specific fine-tuning via transfer learning to further improve reconstruction quality.
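
As a rough illustration of the three-way split described above, the following Python sketch partitions the acquired k-space locations of a single scan into three disjoint masks: one for data consistency, one for the training loss, and one held out for the early-stopping criterion. The function name, split ratios, and masking scheme are hypothetical choices for illustration, not the paper's exact procedure.

```python
import numpy as np

def split_measurements(kspace_mask, rho_loss=0.3, rho_val=0.1, seed=0):
    """Sketch: partition acquired k-space locations of one scan into three
    disjoint sets -- data consistency (DC), training loss, and a held-out
    set for the early-stopping (self-validation) criterion.
    Ratios rho_loss and rho_val are illustrative, not prescribed values."""
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(kspace_mask)        # indices of sampled k-space points
    acquired = rng.permutation(acquired)          # shuffle before splitting

    n_total = acquired.size
    n_val = int(rho_val * n_total)                # early-stopping set size
    n_loss = int(rho_loss * n_total)              # training-loss set size

    val_idx = acquired[:n_val]
    loss_idx = acquired[n_val:n_val + n_loss]
    dc_idx = acquired[n_val + n_loss:]            # remaining points enforce DC

    def mask_from(idx):
        m = np.zeros_like(kspace_mask, dtype=bool)
        m.flat[idx] = True
        return m

    return mask_from(dc_idx), mask_from(loss_idx), mask_from(val_idx)

# Toy usage: undersampled 2D mask; the three resulting masks are disjoint
mask = np.random.rand(256, 256) < 0.25
dc_mask, loss_mask, val_mask = split_measurements(mask)
assert not np.any(dc_mask & loss_mask) and not np.any(dc_mask & val_mask)
```

In this sketch, disjointness of the sets is what allows the held-out set to act as a validation signal for early stopping, since those measurements are never seen by the data-consistency or loss terms during training.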