Abstract: In the present paper, we consider one-hidden-layer feedforward ANNs, also referred to as shallow or two-layer networks, whose structure is determined by the number and types of neurons. The determination of the parameters that define the function, called training, is performed by solving an approximation problem, i.e., by imposing interpolation at a set of specified nodes. We consider the case where the parameters are trained using a procedure referred to as Extreme Learning Machine (ELM), which leads to a linear interpolation problem. Under these hypotheses, the existence of an ANN interpolating function is guaranteed. The focus is then on the accuracy of the interpolation away from the given interpolation nodes, which we take to be equispaced, Chebyshev, or randomly selected. The study is motivated by the well-known bell-shaped Runge example, which makes it clear that a global interpolating polynomial is accurate only if trained on suitably chosen nodes, for example the Chebyshev ones. To evaluate the behavior as the number of interpolation nodes grows, we increase the number of neurons in our network and compare it with the interpolating polynomial. We test on Runge's function and other well-known examples with different regularities. As expected, the accuracy of the approximation with a global polynomial improves only when the Chebyshev nodes are considered. In contrast, the error of the ANN interpolating function always decays, and in most cases we observe that the convergence follows what is observed in the polynomial case on Chebyshev nodes, regardless of the set of nodes used for training.
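To make the linear interpolation problem concrete, here is a minimal sketch of ELM interpolation of Runge's function on the three families of nodes; the tanh activation, the uniform distribution of the inner weights and biases, and the neuron count are illustrative assumptions, not necessarily the paper's exact configuration.

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def elm_features(x, weights, biases):
    # Hidden-layer matrix H[i, j] = sigma(w_j * x_i + b_j), tanh assumed.
    return np.tanh(np.outer(x, weights) + biases)

rng = np.random.default_rng(0)
n = 40                                    # neurons = interpolation nodes (assumed)
weights = rng.uniform(-10, 10, size=n)    # random inner weights, kept fixed
biases = rng.uniform(-10, 10, size=n)     # random biases, kept fixed

# Three choices of interpolation nodes on [-1, 1].
equispaced = np.linspace(-1, 1, n)
chebyshev = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
random_nodes = np.sort(rng.uniform(-1, 1, size=n))

x_test = np.linspace(-1, 1, 2001)
for name, nodes in [("equispaced", equispaced),
                    ("Chebyshev", chebyshev),
                    ("random", random_nodes)]:
    H = elm_features(nodes, weights, biases)
    # Training reduces to the linear interpolation problem H @ beta = y.
    beta = np.linalg.lstsq(H, runge(nodes), rcond=None)[0]
    err = np.max(np.abs(elm_features(x_test, weights, biases) @ beta
                        - runge(x_test)))
    print(f"{name:>10s} nodes: max error = {err:.2e}")
```

Since only the output weights `beta` are trained, the inner parameters stay fixed across all three node sets, which isolates the effect of the node placement on the accuracy.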
Abstract: We propose a numerical scheme based on Random Projection Neural Networks (RPNN) for the solution of Ordinary Differential Equations (ODEs), with a focus on stiff problems. In particular, we use an Extreme Learning Machine, a single-hidden-layer Feedforward Neural Network with Radial Basis Functions whose widths are uniformly distributed random variables, while the weights between the input and the hidden layer are set equal to one. The numerical solution is obtained by constructing a system of nonlinear algebraic equations, which is solved with respect to the output weights using the Gauss-Newton method. For our illustrations, we apply the proposed machine learning approach to two benchmark stiff problems, namely the ROBER and the van der Pol problems (the latter with large values of the stiffness parameter), and we compare it with well-established methods: the adaptive Runge-Kutta method based on the Dormand-Prince pair, and a variable-step, variable-order multistep solver based on numerical differentiation formulas, as implemented in the \texttt{ode45} and \texttt{ode15s} MATLAB functions, respectively. We show that the proposed scheme yields good numerical approximation accuracy without being affected by the stiffness, thus outperforming in some cases the \texttt{ode45} and \texttt{ode15s} functions. Importantly, once trained on a fixed number of collocation points, the proposed scheme approximates the solution over the whole domain, in contrast to classical time integration methods.
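The following is a hedged sketch of the RPNN collocation idea for a scalar model IVP $y' = f(t, y)$, $y(t_0) = y_0$: a Gaussian RBF hidden layer with unit input weights and random widths, and output weights found by Gauss-Newton on the collocation residuals. The test problem, the width distribution, the center placement, and the neuron count are assumptions for illustration, not the paper's exact setup (which treats stiff systems such as ROBER and van der Pol).

```python
import numpy as np

lam = 20.0                                # assumed mildly stiff test problem
def f(t, y):
    return -lam * (y - np.cos(t))
def dfdy(t, y):
    return -lam * np.ones_like(y)

t0, T, y0 = 0.0, 1.0, 0.0
rng = np.random.default_rng(1)
n = 80                                    # number of RBF neurons (assumed)
centers = np.linspace(t0, T, n)           # centers spread over the domain
alphas = rng.uniform(10.0, 500.0, size=n) # random widths, kept fixed

def phi(t):
    # RBF matrix Phi[i, j] = exp(-a_j (t_i - c_j)^2); input weights are 1.
    d = t[:, None] - centers[None, :]
    return np.exp(-alphas * d**2)

def dphi(t):
    d = t[:, None] - centers[None, :]
    return -2.0 * alphas * d * np.exp(-alphas * d**2)

tc = np.linspace(t0, T, 2 * n)            # fixed collocation points
Phi, dPhi, Phi0 = phi(tc), dphi(tc), phi(np.array([t0]))

w = np.zeros(n)
for _ in range(20):                       # Gauss-Newton iterations
    y = Phi @ w
    # Residuals: ODE at the collocation points plus the initial condition.
    r = np.concatenate([dPhi @ w - f(tc, y), Phi0 @ w - np.array([y0])])
    J = np.vstack([dPhi - dfdy(tc, y)[:, None] * Phi, Phi0])
    dw = np.linalg.lstsq(J, -r, rcond=None)[0]
    w += dw
    if np.linalg.norm(dw) < 1e-12:
        break

# Once trained, the network gives the solution anywhere in [t0, T].
t_fine = np.linspace(t0, T, 2001)
y_net = phi(t_fine) @ w
y_exact = (lam**2 * np.cos(t_fine) + lam * np.sin(t_fine)
           - lam**2 * np.exp(-lam * t_fine)) / (lam**2 + 1)
print("max error on fine grid:", np.max(np.abs(y_net - y_exact)))
```

Note that, as the abstract emphasizes, the trained network is a function of $t$ over the whole interval, so `phi(t_fine) @ w` evaluates the approximation at arbitrary points without re-integration.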