As face recognition algorithms become more accurate and are deployed more widely, it becomes increasingly important to ensure that they work equally well for everyone. We study geographic performance differentials (differences in false acceptance and false rejection rates across countries) when comparing selfies against photos from ID documents. We show how to mitigate these differentials with sampling strategies despite large imbalances in the dataset. Fine-tuning a face recognition CNN on domain-specific doc-selfie data with vanilla domain adaptation improves the model's performance on such data, but, in the presence of imbalanced training data, it also significantly increases the demographic bias. We then show how to mitigate this effect by employing sampling strategies that balance the training procedure.
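
As a rough illustration of the kind of country-balanced sampling the abstract alludes to, the sketch below weights each training example by the inverse frequency of its country of origin, so that over-represented countries do not dominate fine-tuning. The field `country_labels`, the dummy tensors, and the use of PyTorch's `WeightedRandomSampler` are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Hypothetical per-sample country labels for a doc-selfie training set;
# in practice these would come from the dataset's metadata.
country_labels = ["US", "US", "US", "US", "BR", "BR", "IN"]

# Weight each sample by the inverse frequency of its country so that every
# country contributes roughly equally to the sampled training stream.
counts = Counter(country_labels)
weights = torch.tensor([1.0 / counts[c] for c in country_labels], dtype=torch.double)
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

# Dummy stand-ins for the actual doc-selfie images and identity labels.
images = torch.randn(len(country_labels), 3, 112, 112)
identities = torch.arange(len(country_labels))
dataset = TensorDataset(images, identities)

# The sampler plugs into a standard DataLoader; fine-tuning the face
# recognition CNN then sees country-balanced batches instead of batches
# that mirror the raw, imbalanced country distribution.
loader = DataLoader(dataset, batch_size=4, sampler=sampler)

for batch_images, batch_ids in loader:
    pass  # forward/backward pass of the fine-tuning step would go here
```

Sampling with replacement means rare countries are seen repeatedly per epoch rather than being dropped, which is one simple way to balance the training procedure without discarding data from over-represented countries.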