Abstract: Operator learning is a recent development in the simulation of partial differential equations (PDEs) by means of neural networks. The idea behind this approach is to learn the behavior of an operator, such that the resulting neural network is an (approximate) mapping between infinite-dimensional spaces that is capable of (approximately) simulating the solution operator of the PDE. In our work, we study general approximation capabilities for linear differential operators by approximating the corresponding symbol in the Fourier domain. In analogy to the structure of the class of H\"ormander symbols, we consider the approximation with respect to a topology that is induced by a sequence of semi-norms. In that sense, we measure the approximation error in terms of a Fr\'echet metric, and our main result identifies sufficient conditions for achieving a predefined approximation error. We then focus on a natural extension of our main theorem, in which we weaken the assumptions on the sequence of semi-norms. Based on an existing approximation result for the exponential spectral Barron space, we present a concrete example of symbols that can be approximated well, and we draw an analogy between this approximation and the design of digital filters in signal processing.
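For orientation, here is a minimal sketch of how a sequence of semi-norms induces a Fr\'echet metric; the generic semi-norms $\|\cdot\|_k$ and the H\"ormander-type semi-norms shown below are standard textbook choices, given here for illustration rather than as the exact objects of the paper:
\[
d(a,b) \;=\; \sum_{k=0}^{\infty} 2^{-k}\, \frac{\|a-b\|_k}{1+\|a-b\|_k},
\qquad
\|a\|_{\alpha,\beta} \;=\; \sup_{x,\xi}\, (1+|\xi|)^{-m+\rho|\alpha|-\delta|\beta|}\, \big|\partial_\xi^{\alpha}\partial_x^{\beta} a(x,\xi)\big|
\]
for symbols in the H\"ormander class $S^m_{\rho,\delta}$. Convergence in the metric $d$ is equivalent to convergence in every semi-norm simultaneously, which is the sense in which a predefined approximation error can be prescribed.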
Abstract: In this work, we consider the approximation capabilities of shallow neural networks in weighted Sobolev spaces for functions in the spectral Barron space. The existing literature already covers several cases in which the spectral Barron space can be approximated well, i.e., without a curse of dimensionality, by shallow networks and several different classes of activation functions. The limitations of the existing results mostly concern the error measures considered, as the results are restricted to Sobolev spaces over a bounded domain. Here we treat two cases that extend the existing results: the case of a bounded domain with Muckenhoupt weights, and the case where the domain is allowed to be unbounded and the weights are required to decay. We first present embedding results of the more general weighted Fourier-Lebesgue spaces into the weighted Sobolev spaces, and we then establish asymptotic approximation rates for shallow neural networks that are free of the curse of dimensionality.
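As a point of reference, a hedged sketch of two objects named in the abstract, in their standard textbook forms (the exact definitions used in the paper may differ): the spectral Barron norm with smoothness index $s$, and the Muckenhoupt $A_p$ condition for a weight $w$ with $1 < p < \infty$:
\[
\|f\|_{\mathcal{B}^s} \;=\; \int_{\mathbb{R}^d} (1+|\xi|)^s\, |\hat f(\xi)|\, d\xi,
\qquad
[w]_{A_p} \;=\; \sup_{B} \left( \frac{1}{|B|}\int_B w \right) \left( \frac{1}{|B|}\int_B w^{-\frac{1}{p-1}} \right)^{p-1} < \infty,
\]
where the supremum runs over all balls $B$. For functions with finite Barron norm, shallow networks with $n$ neurons typically achieve approximation rates of order $n^{-1/2}$ with constants that do not grow with the dimension $d$; this is what is meant by approximation without a curse of dimensionality.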
Abstract: Approximation capabilities of shallow neural networks (SNNs) form an integral part of understanding the properties of deep neural networks (DNNs). In the study of these approximation capabilities, among the most popular classes of target functions are the so-called spectral Barron spaces. These spaces are of special interest when it comes to the approximation of solutions of partial differential equations (PDEs): it has been shown that the solutions of certain static PDEs lie in a spectral Barron space. In order to alleviate the limitation to static PDEs and to include a time domain that might have a different regularity than the space domain, we extend the notion of spectral Barron spaces to anisotropic weighted Fourier-Lebesgue spaces. In doing so, we consider target functions with two blocks of variables, where each block is allowed to have different decay and integrability properties. For these target functions, we first study the embedding of the anisotropic weighted Fourier-Lebesgue spaces into the Bochner-Sobolev spaces. This allows us to measure the approximation error in terms of an anisotropic Sobolev norm, namely the Bochner-Sobolev norm. In a second step, we use this observation to establish a bound on the rate of approximation of functions from the anisotropic weighted Fourier-Lebesgue spaces by SNNs, measured in the Bochner-Sobolev norm.
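To make the anisotropic setting concrete, the following is an illustrative form of an anisotropic weighted Fourier-Lebesgue norm; the splitting of the frequency variables into a time block $\tau$ and a space block $\xi$, the exponents $s_1, s_2$, and the integrability index $p$ are assumptions for illustration rather than the paper's exact choices:
\[
\|f\|_{\mathcal{F}L^p_w} \;=\; \left( \int \big( w(\tau,\xi)\, |\hat f(\tau,\xi)| \big)^p \, d\tau\, d\xi \right)^{1/p},
\qquad
w(\tau,\xi) \;=\; (1+|\tau|)^{s_1}\,(1+|\xi|)^{s_2},
\]
so that the two blocks of variables may carry different decay exponents $s_1 \neq s_2$ and hence different regularity, matching the idea of a time domain whose regularity differs from that of the space domain.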