Federated learning (FL) based magnetic resonance (MR) image reconstruction can facilitate learning valuable priors from multiple institutions without violating patient privacy, thereby accelerating MR imaging. However, existing methods rely on fully sampled data for collaborative training of the model. A client that possesses only undersampled data can neither participate in FL nor benefit from other clients. Furthermore, heterogeneous data distributions across clients hinder FL from training an effective deep learning reconstruction model and thus cause performance degradation. To address these issues, we propose a Self-Supervised Federated Learning method (SSFedMRI). SSFedMRI explores physics-based contrastive reconstruction networks in each client to realize cross-site collaborative training in the absence of fully sampled data. Furthermore, a personalized soft update scheme is designed to simultaneously capture the globally shared representations among different centers and maintain the specific data distribution of each client. The proposed method is evaluated on four datasets and compared with the latest state-of-the-art approaches. Experimental results demonstrate that SSFedMRI reconstructs accurate MR images, both visually and quantitatively, on in-distribution as well as out-of-distribution datasets.
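
To illustrate the kind of personalized soft update described above, the following PyTorch sketch blends each client's local weights with a federated average after every communication round. The FedAvg aggregation and the mixing coefficient `alpha` are assumptions made for illustration, not the paper's exact formulation.

```python
from typing import Dict, List
import torch


def federated_average(client_states: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Plain FedAvg: element-wise mean of the clients' parameter tensors."""
    return {
        name: torch.stack([state[name].float() for state in client_states]).mean(dim=0)
        for name in client_states[0]
    }


def personalized_soft_update(local_state: Dict[str, torch.Tensor],
                             global_state: Dict[str, torch.Tensor],
                             alpha: float = 0.5) -> Dict[str, torch.Tensor]:
    """Soft update: theta_local <- alpha * theta_global + (1 - alpha) * theta_local.

    Keeping a (1 - alpha) share of the local weights lets each client retain its
    site-specific data distribution while still absorbing the shared representation.
    `alpha` here is a hypothetical mixing coefficient, not a value from the paper.
    """
    return {
        name: alpha * global_state[name] + (1.0 - alpha) * local_state[name]
        for name in local_state
    }


# Usage sketch: after each round, aggregate the client models and softly
# push the blended weights back instead of overwriting the local models.
#
# client_states = [net.state_dict() for net in client_nets]
# global_state = federated_average(client_states)
# for net in client_nets:
#     net.load_state_dict(personalized_soft_update(net.state_dict(), global_state))
```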