Model training algorithms which observe a small portion of the training set in each computational step are ubiquitous in practical machine learning, and include both stochastic and online optimization methods. In the vast majority of cases, such algorithms observe the training samples via the gradients of the cost functions the samples incur; that is, they exploit only the \emph{slope} of the cost functions through their first-order approximations. To address limitations of gradient-based methods, such as sensitivity to step-size choice in the stochastic setting, or the inability to exploit small function variability in the online setting, several streams of research attempt to exploit more information about the cost functions than just their gradients via the well-known proximal framework of optimization. However, implementing such methods in practice poses a challenge, since each iteration reduces to computing a proximal operator, which may not be easy. In this work we provide efficient algorithms and corresponding implementations of proximal operators in order to make experimentation with incremental proximal optimization algorithms accessible to a larger audience of researchers and practitioners, and in particular to promote additional theoretical research into these methods by closing the gap between their theoretical description in research papers and their use in practice. The corresponding code is published at https://github.com/alexshtf/inc_prox_pt.
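For concreteness, the incremental proximal-point step referred to above is commonly written as follows; the symbols $f_t$ (the cost incurred at step $t$), $\eta_t$ (the step size), and $x_t$ (the current iterate) are generic notation introduced here for illustration and are not fixed by this section:
\[
x_{t+1} = \operatorname{prox}_{\eta_t f_t}(x_t) = \arg\min_{x} \Big\{ f_t(x) + \frac{1}{2 \eta_t} \| x - x_t \|_2^2 \Big\},
\]
in contrast to the explicit gradient step $x_{t+1} = x_t - \eta_t \nabla f_t(x_t)$. Computing this operator efficiently for each incoming cost function is the challenge the implementations in this work address.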