distributions. To address this challenge, the robust satisficing framework has recently emerged to mitigate ambiguity in the true distribution. Distinguished by its interpretable hyperparameter and enhanced performance guarantees, this approach has attracted increasing attention in academia. However, its applicability to general machine learning problems, notably deep neural networks, remains largely unexplored because of the computational challenge of solving the model efficiently under general loss functions. In this study, we investigate the Kullback-Leibler divergence-based robust satisficing model under a general loss function, presenting analytical interpretations, diverse performance guarantees, efficient and stable numerical methods, convergence analysis, and an extension tailored to hierarchical data structures. Through extensive numerical experiments across three distinct machine learning tasks, we demonstrate the superior performance of our model compared with state-of-the-art benchmarks.
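To fix ideas, the generic robust satisficing formulation from the literature, instantiated with the Kullback-Leibler (KL) divergence, can be sketched as follows; the notation here ($\theta$ for the model parameters, $\ell$ for the loss, $\hat{P}_N$ for the empirical distribution of $N$ samples, $\tau$ for the target risk) is illustrative, and details such as the direction of the divergence and the set of admissible distributions depend on the specific formulation. Whereas empirical risk minimization solves $\min_{\theta \in \Theta} \mathbb{E}_{\hat{P}_N}[\ell(\theta; \xi)]$, the robust satisficing counterpart fixes a target $\tau$, typically slightly above the optimal empirical risk, and seeks the smallest fragility level $k$:
\begin{align*}
\min_{\theta \in \Theta,\; k \ge 0} \quad & k \\
\text{s.t.} \quad & \mathbb{E}_{P}\bigl[\ell(\theta; \xi)\bigr] - \tau \;\le\; k \, \mathrm{KL}\bigl(P \,\|\, \hat{P}_N\bigr) \quad \text{for all admissible } P,
\end{align*}
so that the excess risk under any distribution $P$ is bounded in proportion to how far $P$ deviates from $\hat{P}_N$. The target $\tau$ plays the role of the interpretable hyperparameter mentioned above: unlike the radius of a distributionally robust ambiguity set, it is expressed directly in units of the loss.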
Empirical risk minimization, a cornerstone in machine learning, is often hindered by the Optimizer's Curse stemming from discrepancies between the empirical and true data-generating