Abstract: We consider a general model for high-dimensional empirical risk minimization in which the data $\mathbf{x}_i$ are $d$-dimensional isotropic Gaussian vectors, the model is parametrized by $\mathbf{\Theta}\in\mathbb{R}^{d\times k}$, and the loss depends on the data only through the projection $\mathbf{\Theta}^\mathsf{T}\mathbf{x}_i$. This setting covers, as special cases, classical statistical methods (e.g., multinomial regression and other generalized linear models), as well as two-layer fully connected neural networks with $k$ hidden neurons. We use the Kac-Rice formula from Gaussian process theory to derive a bound on the expected number of local minima of this empirical risk, under the proportional asymptotics in which $n,d\to\infty$ with $n\asymp d$. Via Markov's inequality, this bound allows us to determine the positions of these minimizers (with exponential deviation bounds) and hence to derive sharp asymptotics for the estimation and prediction error. In this paper, we apply our characterization to convex losses, where high-dimensional asymptotics had not, in general, been rigorously established for $k\ge 2$. We show that our approach is tight and allows us to prove previously conjectured results. In addition, we characterize the spectrum of the Hessian at the minimizer. A companion paper applies our general result to non-convex examples.
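For concreteness, a minimal instance of this setting (not spelled out in the abstract itself, and written here under the assumption that the loss takes the standard finite-sum form, with a placeholder loss $\ell$ and responses $y_i$) is
\[
\widehat{R}_n(\mathbf{\Theta}) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(\mathbf{\Theta}^\mathsf{T}\mathbf{x}_i;\, y_i\bigr), \qquad \mathbf{x}_i \sim \mathsf{N}(\mathbf{0}, \mathbf{I}_d), \quad \mathbf{\Theta}\in\mathbb{R}^{d\times k},
\]
where the local minima of $\widehat{R}_n$ are counted via the Kac-Rice formula in the proportional regime $n,d\to\infty$ with $n\asymp d$.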
Abstract: In recent years, the development of easy-to-use learning environments has greatly accelerated the implementation and reproducible benchmarking of reinforcement learning algorithms. In this article, we introduce the Dynamic Fee learning Environment (DyFEn), an open-source model of a real-world financial network. It provides a testbed for evaluating different reinforcement learning techniques. To illustrate the promise of DyFEn, we present a challenging problem: simultaneous multi-channel dynamic fee setting for off-chain payment channels. This problem is well known in the Bitcoin Lightning Network and has no known effective solution. Specifically, we report the empirical results of several commonly used deep reinforcement learning methods on this dynamic fee setting task as a baseline for further experiments. To the best of our knowledge, this work proposes the first virtual learning environment based on a simulation of blockchain and distributed ledger technologies, unlike many others which are based on physics simulations or game platforms.
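To make the intended task concrete, the following is a minimal sketch of an interaction loop for multi-channel fee setting. The environment class, its dynamics, and the random-policy baseline are hypothetical illustrations, not DyFEn's actual API or simulator.

```python
import numpy as np

# Hypothetical toy environment illustrating the dynamic fee setting task.
# This is NOT DyFEn's actual API; names and dynamics are illustrative only.
class ToyFeeEnv:
    def __init__(self, n_channels=4, horizon=100, seed=0):
        self.n_channels = n_channels      # off-chain payment channels controlled by the node
        self.horizon = horizon            # episode length in time steps
        self.rng = np.random.default_rng(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        # Observation: current routing demand on each channel (arbitrary units).
        self.demand = self.rng.uniform(0.5, 1.5, size=self.n_channels)
        return self.demand.copy()

    def step(self, fees):
        # fees: one fee rate per channel; higher fees reduce routed volume.
        fees = np.clip(np.asarray(fees, dtype=float), 0.0, 1.0)
        volume = self.demand * np.exp(-5.0 * fees)   # illustrative demand elasticity
        reward = float(np.sum(fees * volume))        # total fee revenue this step
        self.demand = np.clip(
            self.demand + self.rng.normal(0.0, 0.1, self.n_channels), 0.1, 2.0
        )
        self.t += 1
        done = self.t >= self.horizon
        return self.demand.copy(), reward, done, {}

# Random-policy baseline: sample a fee vector each step and accumulate revenue.
env = ToyFeeEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = np.random.uniform(0.0, 1.0, size=env.n_channels)
    obs, reward, done, _ = env.step(action)
    total += reward
print(f"Random-policy revenue over one episode: {total:.2f}")
```

A learning agent would replace the random fee vector with a policy trained by a deep reinforcement learning method, which is the kind of baseline comparison the article reports.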