Abstract: We consider the capability of recurrent neural networks to approximate trajectories of a random dynamical system, with random inputs, on non-compact domains and over an indefinite or infinite time horizon. The main result states that certain random trajectories over an infinite time horizon may be approximated to any desired accuracy, uniformly in time, by a class of deep recurrent neural networks with simple feedback structures. This formulation contrasts with much of the related literature on this topic, which is typically restricted to compact state spaces and finite time intervals. The model conditions required here are natural, mild, and easy to test, and the proof is elementary.
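The time-uniform claim in the first abstract can be stated schematically as follows; the notation here is ours rather than the paper's, and whether the error is measured pathwise or in expectation depends on the precise mode of approximation established in the main result. For a target trajectory $(x_t)_{t \ge 0}$ of the random dynamical system and any $\varepsilon > 0$, there exists a recurrent network whose output trajectory $(\hat{x}_t)_{t \ge 0}$ satisfies

\[
  \sup_{t \ge 0} \, \bigl\| x_t - \hat{x}_t \bigr\| \le \varepsilon ,
\]

so the guarantee does not degrade as the horizon grows, in contrast with finite-horizon results whose bounds may depend on the length of the time interval.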
Abstract: We consider the Bayesian optimal filtering problem, i.e., estimating conditional statistics of a latent time-series signal from an observation sequence. Classical approaches often rely on assumed or estimated transition and observation models. Instead, we formulate a generic recurrent neural network framework and seek to directly learn a recursive mapping from observational inputs to the desired estimator statistics. The main focus of this article is the approximation capabilities of this framework. We provide approximation error bounds for filtering in general non-compact domains, and we consider strong time-uniform approximation error bounds that guarantee good long-time performance. We discuss and illustrate a number of practical concerns and implications of these results.
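To make the learned-filter idea in the second abstract concrete, the following is a minimal sketch, not the authors' implementation: a GRU is trained on data simulated from an assumed linear-Gaussian state-space model (the model, its parameters a, q, r, the RNNFilter and simulate names, and the training setup are all hypothetical choices for illustration) to map observation sequences directly to estimates of the latent signal. Because the minimizer of squared error is the conditional expectation, the trained network approximates the optimal filter's conditional mean without ever being given the transition or observation model explicitly.

# Minimal sketch: learn a recursive map y_{1:t} -> E[x_t | y_{1:t}] with a GRU,
# trained on simulated trajectories of a hypothetical linear-Gaussian model.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(batch, steps, a=0.9, q=0.1, r=0.5):
    """Simulate x_{t+1} = a x_t + q w_t, y_t = x_t + r v_t (hypothetical test model)."""
    x = torch.zeros(batch, 1)
    xs, ys = [], []
    for _ in range(steps):
        x = a * x + q * torch.randn(batch, 1)
        y = x + r * torch.randn(batch, 1)
        xs.append(x)
        ys.append(y)
    return torch.stack(xs, 1), torch.stack(ys, 1)  # shapes: (batch, steps, 1)

class RNNFilter(nn.Module):
    """Recursive estimator: the GRU hidden state summarizes y_{1:t}."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, y):
        h, _ = self.rnn(y)
        return self.head(h)  # estimate of x_t at every step t

model = RNNFilter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x, y = simulate(batch=64, steps=50)
    loss = ((model(y) - x) ** 2).mean()  # MSE targets the conditional mean
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned filter should beat the raw observations as an estimate of x_t.
x, y = simulate(batch=512, steps=50)
with torch.no_grad():
    print("filter MSE:", ((model(y) - x) ** 2).mean().item(),
          "| raw-observation MSE:", ((y - x) ** 2).mean().item())

In this linear-Gaussian test case the Kalman filter gives the exact conditional mean, so it provides a natural reference point for how close such a learned recursion gets; the approximation results summarized in the abstract concern how well this kind of recursive network can do in principle, including on non-compact domains and uniformly in time.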