Abstract: (Block-)coordinate minimization is an iterative optimization method that in every iteration finds a global minimum of the objective over one variable, or a block of variables, while keeping the remaining variables fixed. For some problems coordinate minimization converges to a global minimum (e.g., when the objective is convex and differentiable), but for general (non-differentiable) convex problems this need not be the case. Despite this drawback, (block-)coordinate minimization can be an acceptable option for large-scale non-differentiable convex problems; examples are methods for solving the linear programming relaxation of the discrete energy minimization problem (MAP inference in graphical models). When block-coordinate minimization is applied to a general convex problem, the minimizer over the current coordinate block need not be unique in every iteration, and therefore a single minimizer must be chosen. We propose to choose this minimizer from the relative interior of the set of all minimizers over the current block. We show that this rule is, in a certain precise sense, no worse than any other rule.
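
To make the tie-breaking issue concrete, the following minimal Python sketch (illustrative only, not code or an example taken from the paper) runs coordinate minimization on the toy non-differentiable convex function f(x, y) = max(|x|, |y|), whose per-coordinate minimizer is a whole interval; the helper and function names are hypothetical. Picking an endpoint of that interval can stall at a non-optimal point, whereas picking a point from the relative interior (here, the midpoint of the interval) reaches the global minimum.

```python
def block_argmin(other):
    """Minimizers of max(|x|, |other|) over x: the closed interval [-|other|, |other|]."""
    r = abs(other)
    return -r, r

def coordinate_minimization(x, y, rule, sweeps=5):
    """Alternately minimize f(x, y) = max(|x|, |y|) over x and over y."""
    for _ in range(sweeps):
        lo, hi = block_argmin(y)   # minimize over x with y fixed
        x = 0.5 * (lo + hi) if rule == "relative_interior" else hi
        lo, hi = block_argmin(x)   # minimize over y with x fixed
        y = 0.5 * (lo + hi) if rule == "relative_interior" else hi
    return x, y, max(abs(x), abs(y))

print(coordinate_minimization(1.0, 1.0, rule="endpoint"))           # stalls at (1, 1), f = 1
print(coordinate_minimization(1.0, 1.0, rule="relative_interior"))  # reaches (0, 0), f = 0
```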