Abstract: In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem. In many scenarios, however, the meaningful structure is specified in some discrete space, leading to difficult nonconvex optimization problems. In this paper, we connect the model selection problem with structure-promoting regularizers to submodular function minimization with continuous and discrete arguments. In particular, we leverage the theory of submodularity to identify a class of these problems that can be solved exactly and efficiently with an agnostic combination of discrete and continuous optimization routines. We show how simple continuous or discrete constraints can also be handled for certain problem classes, with robust optimization as a motivating application. Finally, we validate our theoretical results numerically on several proof-of-concept examples, comparing against state-of-the-art algorithms.