Domain shifts are ubiquitous in machine learning and can substantially degrade a model's performance when it is deployed on real-world data. To address this, distribution alignment methods aim to learn feature representations that are invariant across domains by minimising the discrepancy between their distributions. However, discrepancy estimates can be extremely noisy when training via stochastic gradient descent (SGD), and shifts in the relative proportions of different subgroups can lead to domain misalignments; both effects can stifle the benefits of the method. This paper proposes to improve these estimates by inducing diversity in each sampled minibatch. This simultaneously balances the data and reduces the variance of the gradients, thereby enhancing the model's generalisation ability. We describe two options for diversity-based data samplers, based on the k-determinantal point process (k-DPP) and the k-means++ algorithm, which can function as drop-in replacements for a standard random sampler. On a real-world domain shift task of bioacoustic event detection, we show that both options 1) yield minibatches that are more representative of the full dataset; 2) reduce the error of distance estimation between distributions, for a given sample size; and 3) improve out-of-distribution accuracy for two distribution alignment algorithms, as well as for standard ERM.
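
To make the second option concrete, below is a minimal sketch of k-means++-style diverse batch selection: the first index is drawn uniformly, and each subsequent index is drawn with probability proportional to its squared distance to the nearest already-selected point, so the batch spreads out over the feature space. The function name `kmeanspp_batch` and the assumption of precomputed feature embeddings are illustrative, not the paper's implementation.

```python
import numpy as np

def kmeanspp_batch(features, k, rng=None):
    """Sample a diverse minibatch of k indices via k-means++ seeding.

    features : (n, d) array of per-example feature embeddings (assumed
               precomputed; illustrative, not the paper's exact setup).
    k        : minibatch size.
    """
    rng = np.random.default_rng(rng)
    n = features.shape[0]
    # First index uniformly at random.
    idx = [int(rng.integers(n))]
    # Squared distance of every point to the nearest selected point.
    d2 = np.sum((features - features[idx[0]]) ** 2, axis=1)
    for _ in range(k - 1):
        # Draw the next index proportionally to squared distance,
        # favouring points far from everything chosen so far.
        nxt = int(rng.choice(n, p=d2 / d2.sum()))
        idx.append(nxt)
        # Update nearest-selected distances with the new point.
        d2 = np.minimum(d2, np.sum((features - features[nxt]) ** 2, axis=1))
    return np.array(idx)

# Usage: draw a batch of 32 from 10,000 points in a 16-d feature space.
X = np.random.default_rng(0).normal(size=(10_000, 16))
batch = kmeanspp_batch(X, k=32, rng=0)
```

In a training loop, such a function could back a custom batch sampler in place of uniform random sampling, which is what "drop-in replacement" refers to above.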