Abstract: Nonlinear dynamical systems exposed to changing forcing can exhibit catastrophic transitions between alternative and often markedly different states. The phenomenon of critical slowing down (CSD) can be used to anticipate such transitions if caused by a bifurcation and if the change in forcing is slow compared to the internal time scale of the system. However, in many real-world situations, these assumptions are not met and transitions can be triggered because the forcing exceeds a critical rate. For example, given the pace of anthropogenic climate change in comparison to the internal time scales of key Earth system components, such as the polar ice sheets or the Atlantic Meridional Overturning Circulation, such rate-induced tipping poses a severe risk. Moreover, depending on the realisation of random perturbations, some trajectories may transition across an unstable boundary, while others do not, even under the same forcing. CSD-based indicators generally cannot distinguish these cases of noise-induced tipping versus no tipping. This severely limits our ability to assess the risks of tipping, and to predict individual trajectories. To address this, we make a first attempt to develop a deep learning framework to predict transition probabilities of dynamical systems ahead of rate-induced transitions. Our method issues early warnings, as demonstrated on three prototypical systems for rate-induced tipping, subjected to time-varying equilibrium drift and noise perturbations. Exploiting explainable artificial intelligence methods, our framework captures the fingerprints necessary for early detection of rate-induced tipping, even in cases of long lead times. Our findings demonstrate the predictability of rate-induced and noise-induced tipping, advancing our ability to determine safe operating spaces for a broader class of dynamical systems than possible so far.
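The rate-induced mechanism summarised above can be illustrated on a standard one-dimensional prototype (a minimal hypothetical sketch, not one of the three systems studied in the paper): dx/dt = (x + lam(t))^2 - 1, whose frozen-time stable and unstable equilibria sit at x = -lam - 1 and x = -lam + 1. Under a linear ramp lam(t) = rate * t, no bifurcation ever occurs, yet once the rate exceeds a critical value (here 1), the moving stable state outruns the trajectory, which then crosses the unstable branch and escapes:

```python
def simulate(rate, lam_max=6.0, dt=1e-3, settle=5.0):
    """Euler-integrate dx/dt = (x + lam(t))**2 - 1 under the linear ramp
    lam(t) = min(rate * t, lam_max); return True if the trajectory tips."""
    t_end = lam_max / rate + settle
    x, t = -1.0, 0.0           # start on the stable branch x = -lam - 1 at lam = 0
    while t < t_end:
        lam = min(rate * t, lam_max)
        y = x + lam            # frozen-system equilibria: y = -1 (stable), y = +1 (unstable)
        if y > 1.0:            # crossed the unstable branch: the trajectory escapes
            return True
        x += (y * y - 1.0) * dt
        t += dt
    return False

print(simulate(0.5))  # sub-critical ramp rate: tracks the moving equilibrium -> False
print(simulate(1.5))  # super-critical ramp rate: rate-induced tipping -> True
```

Adding a noise term to the Euler step would also produce the noise-induced transitions the abstract refers to; the ramp height and thresholds here are illustrative choices.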
Abstract: The echo index counts the number of simultaneously stable asymptotic responses of a nonautonomous (i.e. input-driven) dynamical system. It generalizes the well-known echo state property for recurrent neural networks, which corresponds to an echo index equal to one. In this paper, we investigate how the echo index depends on parameters that govern typical responses to a finite-state ergodic external input that forces the dynamics. We consider the echo index for a nonautonomous system that switches between a finite set of maps, where we assume that each map possesses a finite set of hyperbolic equilibrium attractors. We find that the minimum and maximum numbers of repetitions of each map are crucial for the resulting echo index. Casting our theoretical findings in the RNN computing framework, we find that for small-amplitude forcing the echo index corresponds to the number of attractors of the input-free system, while for large-amplitude forcing the echo index reduces to one. The intermediate regime is the most interesting: in this region, the echo index depends not just on the amplitude of the forcing but also on more subtle properties of the input.
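The small- and large-amplitude regimes described above can be probed numerically with a toy driven map (a hypothetical sketch; this particular map and its parameters are not taken from the paper): iterate x_{n+1} = tanh(3 x_n + u_n), whose input-free version has two stable fixed points, over a grid of initial conditions under one fixed random input sequence, and count the distinct asymptotic responses:

```python
import math
import random

def echo_index(amplitude, n_steps=300, seed=0):
    """Empirical echo index: number of distinct asymptotic responses of
    x_{n+1} = tanh(3 x_n + u_n) over a grid of initial conditions,
    for one fixed +/- amplitude input sequence."""
    rng = random.Random(seed)
    inputs = [amplitude * rng.choice((-1.0, 1.0)) for _ in range(n_steps)]
    finals = set()
    for i in range(41):                  # initial conditions on [-1, 1]
        x = i / 20.0 - 1.0
        for u in inputs:
            x = math.tanh(3.0 * x + u)
        finals.add(round(x, 3))          # cluster synchronized endpoints
    return len(finals)

print(echo_index(0.1))  # weak forcing: both input-free attractors persist -> 2
print(echo_index(5.0))  # strong forcing: all responses synchronize -> 1
```

In the intermediate amplitude range, the count returned by such an experiment depends on the particular input realisation, which is the regime the abstract highlights.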
Abstract: Machine learning has become a basic tool in scientific research and in the development of technologies with significant impact on society. Such methods make it possible to discover regularities in data and to make predictions without explicit knowledge of the rules governing the system under analysis. However, a price must be paid for this modelling flexibility: machine learning methods are usually black boxes, meaning that it is difficult to fully understand what the machine is doing and how. This constrains the applicability of such methods and forgoes the possibility of gathering novel scientific insights from experimental data. Our research aims to open the black box of recurrent neural networks, an important family of neural networks suited to processing sequential data. Here, we propose a novel methodology that provides a mechanistic interpretation of their behaviour when they are used to solve computational tasks. The methodology is based on mathematical constructs called excitable network attractors, which are models represented as networks in phase space, composed of stable attractors and excitable connections between them. As the behaviour of a recurrent neural network depends on its training and on the inputs driving the otherwise autonomous system, we introduce an algorithm to extract network attractors directly from a trajectory generated by the network while it solves tasks. Simulations conducted on a controlled benchmark highlight the relevance of the proposed methodology for interpreting the behaviour of recurrent neural networks on tasks that involve learning a finite number of stable states.
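The extraction idea can be caricatured in a few lines (a loose, hypothetical sketch under strong simplifications, not the paper's algorithm): given a scalar trajectory that dwells near a finite set of stable states, label the slow, near-stationary points and record transitions between consecutive labels, yielding the nodes and directed edges of a network attractor:

```python
import math

def extract_network(traj, eps=1e-3):
    """Label points where the trajectory is nearly stationary and record
    transitions between consecutive labels (nodes and directed edges)."""
    nodes, edges, last = set(), set(), None
    for a, b in zip(traj, traj[1:]):
        if abs(b - a) < eps:               # dwelling near a stable state
            label = round(a, 1)
            nodes.add(label)
            if last is not None and label != last:
                edges.add((last, label))   # excitable connection last -> label
            last = label
    return nodes, edges

# Toy trajectory: a bistable driven map, kicked between its two stable
# states by occasional input pulses.
pulses = {50: 4.0, 120: -4.0, 200: 4.0}
x, traj = -1.0, []
for n in range(300):
    traj.append(x)
    x = math.tanh(3.0 * x + pulses.get(n, 0.0))

nodes, edges = extract_network(traj)
print(nodes)   # the two stable states, labelled -1.0 and 1.0
print(edges)   # the directed transitions between them
```

A trained RNN has a high-dimensional state, so a faithful version of this step would require clustering rather than scalar rounding; the dwell threshold eps and the pulse schedule are illustrative choices.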