Many real prediction tasks, such as molecular property prediction, require the ability to extrapolate to unseen domains. Success in these tasks typically hinges on finding a good representation. In this paper, we extend invariant risk minimization (IRM) by recasting its simultaneous optimality condition in terms of regret, finding instead a representation that enables the predictor to be optimal against an oracle with hindsight access to held-out environments. This change refocuses the principle on generalization and does not collapse even with strong predictors that can perfectly fit all the training data. Our regret minimization (RGM) approach can be further combined with adaptive domain perturbations to handle combinatorially defined environments. We evaluate our method on two real-world applications, molecular property prediction and protein homology detection, and show that RGM significantly outperforms previous state-of-the-art domain generalization techniques.
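
To make the contrast with IRM concrete, a minimal sketch of the regret objective follows; the notation ($\phi$ for the shared representation, $f$ for the predictor trained without access to environment $e$, $\ell_e$ for the risk on $e$, and $f_e^{*}$ for the oracle fit with hindsight to $e$) is illustrative only, and the precise formulation is given in the body of the paper:
\[
  \mathrm{Regret}(\phi) \;=\; \sum_{e}\Big[\,\ell_e\big(f \circ \phi\big) \;-\; \ell_e\big(f_e^{*} \circ \phi\big)\Big],
  \qquad
  f_e^{*} \in \operatorname*{arg\,min}_{\bar f}\ \ell_e\big(\bar f \circ \phi\big).
\]
Under this reading, rather than requiring a single classifier to be simultaneously optimal on every training environment as in IRM, the representation $\phi$ is judged by how little a predictor trained on the remaining environments loses to an oracle with hindsight access to the held-out one.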