Gamma-minimax estimation is an approach for incorporating prior information into an estimation procedure when it is implausible to specify a single prior distribution. In this approach, we aim for an estimator that minimizes the worst-case Bayes risk over a set $\Gamma$ of prior distributions. Traditionally, Gamma-minimax estimation has been defined for parametric models. In this paper, we define Gamma-minimaxity for general models and propose iterative algorithms, with convergence guarantees, to compute Gamma-minimax estimators for a general model space and a set of prior distributions constrained by generalized moments. We also propose encoding the space of candidate estimators by neural networks to enable flexible estimation. We illustrate our method in two settings, namely entropy estimation and a problem that arises in biodiversity studies.
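The worst-case criterion described above can be written out as follows; this is a notational sketch, where the symbols $r$ for the Bayes risk and $\mathcal{D}$ for the class of candidate estimators are assumptions introduced here for illustration, not taken from the abstract:

```latex
% Sketch of the Gamma-minimax objective. Notation assumed:
% $r(\pi, d)$ is the Bayes risk of estimator $d$ under prior $\pi$,
% and $\mathcal{D}$ is the class of candidate estimators.
% A Gamma-minimax estimator $d^\star$ attains
\[
  d^\star \in \operatorname*{arg\,min}_{d \in \mathcal{D}}
    \; \sup_{\pi \in \Gamma} \; r(\pi, d),
\]
% i.e.\ it minimizes the worst-case Bayes risk over the set $\Gamma$
% of prior distributions.
```

Under this formulation, restricting $\mathcal{D}$ to estimators encoded by neural networks, as proposed in the paper, makes the outer minimization tractable for flexible estimator classes.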