Abstract: In this paper, we introduce the proper latent decomposition (PLD) as a generalization of the proper orthogonal decomposition (POD) on manifolds. PLD is a nonlinear reduced-order modeling technique for compressing high-dimensional data into nonlinear coordinates. First, we compute a reduced set of intrinsic coordinates (latent space) to accurately describe a flow with fewer degrees of freedom than the numerical discretization. The latent space, which is geometrically a manifold, is inferred by an autoencoder. Second, we leverage tools from differential geometry to develop numerical methods for operating directly on the latent space; namely, a metric-constrained Eikonal solver for distance computations. Within this numerical framework, we propose an algorithm to perform PLD on the manifold. Third, we demonstrate results for a laminar flow case and the turbulent Kolmogorov flow. For the laminar flow case, we identify a semi-analytical expression for the solution of the Navier-Stokes equations; for the Kolmogorov flow case, we identify a dominant mode that exhibits physical structures, which are compared with POD. This work opens opportunities for analyzing autoencoders and latent spaces, nonlinear reduced-order modeling, and scientific insights into the structure of high-dimensional data.
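A minimal sketch of the first PLD step (not the authors' implementation): learning a low-dimensional latent space of flow snapshots with an autoencoder in PyTorch. The sizes `n_full` and `n_latent` and the random snapshots are placeholder assumptions.

```python
# Hedged sketch, not the authors' code: compress flow snapshots to a
# low-dimensional latent space with a fully connected autoencoder.
import torch
import torch.nn as nn

n_full, n_latent = 1024, 2            # grid size and latent dimension (assumed)
encoder = nn.Sequential(nn.Linear(n_full, 128), nn.Tanh(),
                        nn.Linear(128, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.Tanh(),
                        nn.Linear(128, n_full))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

snapshots = torch.randn(256, n_full)  # stand-in for flow snapshots

for _ in range(200):                  # reconstruction training
    z = encoder(snapshots)            # intrinsic (latent) coordinates
    loss = ((decoder(z) - snapshots) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# The latent coordinates z lie on a manifold, on which the metric-constrained
# Eikonal solver and the PLD algorithm then operate.
```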
Abstract: The data-driven learning of solutions of partial differential equations can be based on a divide-and-conquer strategy. First, the high-dimensional data is compressed to a latent space with an autoencoder; second, the temporal dynamics are inferred on the latent space with a form of recurrent neural network. In chaotic systems and turbulence, convolutional autoencoders and echo state networks (CAE-ESN) successfully forecast the dynamics, but little is known about whether the stability properties can also be inferred. We show that the CAE-ESN model infers the invariant stability properties and the geometry of the tangent space in the low-dimensional manifold (i.e., the latent space) through Lyapunov exponents and covariant Lyapunov vectors. This work opens up new opportunities for inferring the stability of high-dimensional chaotic systems in latent spaces.
Abstract: Partial differential equations, and their chaotic solutions, are pervasive in the modelling of complex systems in engineering, science, and beyond. Data-driven methods can find solutions to partial differential equations with a divide-and-conquer strategy: the solution is sought in a latent space, on which the temporal dynamics are inferred (``latent-space'' approach). This is achieved by, first, compressing the data with an autoencoder and, second, inferring the temporal dynamics with recurrent neural networks. The overarching goal of this paper is to show that a latent-space approach can not only infer the solution of a chaotic partial differential equation, but also predict the stability properties of the physical system. First, we employ the convolutional autoencoder echo state network (CAE-ESN) on the chaotic Kuramoto-Sivashinsky equation for various chaotic regimes. We show that the CAE-ESN (i) finds a low-dimensional latent-space representation of the observations and (ii) accurately infers the Lyapunov exponents and covariant Lyapunov vectors (CLVs) in this low-dimensional manifold for different attractors. Second, we extend the CAE-ESN to a turbulent flow, comparing the Lyapunov spectrum to estimates obtained from Jacobian-free methods. The CAE-ESN effectively produces a latent space that preserves the key properties of the chaotic system, such as the Lyapunov exponents and CLVs, thus retaining the geometric structure of the attractor. The latent-space approach based on the CAE-ESN is therefore a reduced-order model that accurately predicts the dynamics of the chaotic system or, alternatively, infers its stability properties from data.
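The stability inference can be illustrated with a sketch (an assumed closed-loop ESN, not the paper's CAE-ESN): the Lyapunov exponents of the latent dynamics follow from QR-reorthonormalised products of the ESN Jacobian. All weights below are random placeholders; in practice `W_out` would be trained by ridge regression on the latent time series.

```python
# Hedged sketch: Lyapunov exponents of a closed-loop echo state network,
# r_{t+1} = tanh(W r_t + W_in W_out r_t), via QR of Jacobian products.
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 3                                # reservoir size, tracked exponents
W = rng.normal(0, 1 / np.sqrt(N), (N, N))    # reservoir weights (placeholder)
W_in = rng.normal(0, 0.1, (N, d))            # input weights (placeholder)
W_out = rng.normal(0, 0.1, (d, N))           # read-out (would be trained)

A = W + W_in @ W_out                         # closed-loop linear part
r = rng.normal(0, 0.1, N)
Q = np.linalg.qr(rng.normal(size=(N, d)))[0] # d tangent-space directions
les, T = np.zeros(d), 2000
for _ in range(T):
    r = np.tanh(A @ r)
    J = (1 - r**2)[:, None] * A              # Jacobian of the tanh map at r
    Q, R = np.linalg.qr(J @ Q)               # reorthonormalise tangent vectors
    les += np.log(np.abs(np.diag(R)))
print("Lyapunov exponents (per step):", les / T)
```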
Abstract: In the current Noisy Intermediate-Scale Quantum (NISQ) era, the presence of noise deteriorates the performance of quantum computing algorithms. Quantum Reservoir Computing (QRC) is a type of quantum machine learning algorithm, which, however, can benefit from different types of tuned noise. In this paper, we analyse the effect that finite-sampling noise has on the chaotic time-series prediction capabilities of QRC and Recurrence-free Quantum Reservoir Computing (RF-QRC). First, we show that, even without a recurrent loop, RF-QRC retains temporal information about previous reservoir states through leaky integrated neurons. This makes RF-QRC different from Quantum Extreme Learning Machines (QELM). Second, we show that finite-sampling noise degrades the prediction capabilities of both QRC and RF-QRC, affecting QRC more strongly because the noise propagates through the recurrence. Third, we optimize the training of the finite-sampled quantum reservoir computing framework using two methods: (a) Singular Value Decomposition (SVD) applied to the data matrix containing the noisy reservoir activation states; and (b) data-filtering techniques that remove the high-frequency components from the noisy reservoir activation states. We show that denoising the reservoir activation states improves the signal-to-noise ratio and reduces the training loss. Finally, we demonstrate that the training and denoising of the noisy reservoir activation signals in RF-QRC are highly parallelizable across multiple Quantum Processing Units (QPUs), in contrast to the QRC architecture with recurrent connections. The analyses are numerically showcased on prototypical chaotic dynamical systems with relevance to turbulence. This work opens opportunities for using quantum reservoir computing with finite samples for time-series forecasting on near-term quantum hardware.
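Method (a), truncated-SVD denoising of the reservoir activation states, can be sketched as follows. This is illustrative only: the synthetic data matrix, the noise model, and the truncation rank `k` are assumptions, not the paper's settings.

```python
# Hedged sketch: denoise a matrix of noisy reservoir activation states by
# truncated SVD before training the linear read-out.
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 64                                # time steps, reservoir neurons
clean = np.sin(np.outer(np.arange(T), rng.uniform(0.01, 0.1, N)))
R_noisy = clean + 0.1 * rng.normal(size=(T, N))  # proxy for sampling noise

U, s, Vt = np.linalg.svd(R_noisy, full_matrices=False)
k = 10                                        # keep k dominant directions
R_denoised = U[:, :k] * s[:k] @ Vt[:k]        # rank-k reconstruction

print("error before:", np.linalg.norm(R_noisy - clean))
print("error after: ", np.linalg.norm(R_denoised - clean))
```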
Abstract: Data from fluid flow measurements are typically sparse, noisy, and heterogeneous, often from mixed pressure and velocity measurements, resulting in incomplete datasets. In this paper, we develop a physics-constrained convolutional neural network, which is a deterministic tool, to reconstruct the full flow field from incomplete data. We explore three loss functions, drawn both from the machine-learning literature and newly proposed: (i) the softly-constrained loss, which allows the prediction to take any value; (ii) the snapshot-enforced loss, which constrains the prediction at the sensor locations; and (iii) the mean-enforced loss, which constrains the mean of the prediction at the sensor locations. The proposed methods do not require the full flow field during training, which makes them suitable for reconstruction from incomplete data. We apply the method to reconstruct a laminar wake of a bluff body and a turbulent Kolmogorov flow. First, we assume that the measurements are noise-free and reconstruct both the laminar wake and the Kolmogorov flow from sensors located at fewer than 1% of all grid points. The snapshot-enforced loss reduces the reconstruction error of the Kolmogorov flow by approximately 25% compared with the softly-constrained loss. Second, we assume that the measurements are noisy and propose the mean-enforced loss to reconstruct the laminar wake and the Kolmogorov flow at three different signal-to-noise ratios. We find that, across the ratios tested, the loss functions with harder constraints are more robust to both the random initialization of the networks and the noise levels in the measurements. At high noise levels, the mean-enforced loss recovers the instantaneous snapshots accurately, making it the suitable choice when reconstructing flows from data corrupted with an unknown amount of noise. The proposed method opens opportunities for physical flow reconstruction from sparse, noisy data.
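One plausible reading of the three constraint types, written as a PyTorch sketch; the paper's exact formulations may differ. `pred` is the network output, `meas` the measured field (defined at the sensors), and `mask` a boolean sensor mask on the same grid; all three names are assumptions.

```python
# Hedged sketch of the three sensor constraints described in the abstract.
import torch

def softly_constrained_loss(pred, meas, mask):
    # (i) penalise the sensor mismatch only; the prediction remains free
    return ((pred - meas)[mask] ** 2).mean()

def snapshot_enforce(pred, meas, mask):
    # (ii) hard constraint: overwrite the prediction at the sensors in
    # every snapshot before any further (e.g. physics) loss is evaluated
    return torch.where(mask, meas, pred)

def mean_enforce(pred, meas, mask):
    # (iii) hard constraint on the mean: shift the prediction so its mean
    # at the sensors matches the measurement mean (robust to zero-mean noise)
    return pred + (meas[mask].mean() - pred[mask].mean())
```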
Abstract: Deep Learning (DL) models have been successfully applied to many tasks, including biomedical cell segmentation and classification in histological images. These models require large amounts of annotated data, which might not always be available, especially in the medical field, where annotations are scarce and expensive. To overcome this limitation, we propose a novel pipeline for generating synthetic datasets for cell segmentation. Given only a handful of annotated images, our method generates a large dataset of images that can be used to effectively train DL instance segmentation models. Our solution is designed to generate cells of realistic shapes and placement by allowing experts to incorporate domain knowledge during the generation of the dataset.
Abstract: We introduce a new family of minimal problems for reconstruction from multiple views. Our primary focus is a novel approach to autocalibration, a long-standing problem in computer vision. Traditional approaches to this problem, such as those based on Kruppa's equations or the modulus constraint, rely explicitly on the knowledge of multiple fundamental matrices or a projective reconstruction. In contrast, we consider a novel formulation involving constraints on image points, the unknown depths of 3D points, and a partially specified calibration matrix $K$. For $2$ and $3$ views, we present a comprehensive taxonomy of minimal autocalibration problems obtained by relaxing some of these constraints. These problems are organized into classes according to the number of views and any assumed prior knowledge of $K$. Within each class, we determine the problems with the fewest -- or a relatively small number of -- solutions. From this zoo of problems, we devise three practical solvers. Experiments with synthetic and real data, and interfacing our solvers with COLMAP, demonstrate that we achieve superior accuracy compared to state-of-the-art calibration methods. The code is available at https://github.com/andreadalcin/MinimalPerspectiveAutocalibration
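A hypothetical illustration of a depth-based formulation (inspired by the abstract, not one of the paper's solvers): back-projected points $\lambda_{ij} K^{-1} x_{ij}$ in two views are related by a rigid motion, so pairwise Euclidean distances must agree across views, yielding polynomial constraints in the unknown depths and the unknown focal length. The point coordinates below are arbitrary assumptions.

```python
# Hedged sketch: polynomial constraints from a depth formulation of
# autocalibration with a partially specified K = diag(f, f, 1).
import sympy as sp

f, l11, l12, l21, l22 = sp.symbols('f l11 l12 l21 l22', positive=True)
Kinv = sp.diag(1 / f, 1 / f, 1)                      # unknown focal length f

x11, x12 = sp.Matrix([0.1, 0.2, 1]), sp.Matrix([-0.3, 0.1, 1])    # view 1
x21, x22 = sp.Matrix([0.15, 0.18, 1]), sp.Matrix([-0.28, 0.12, 1])  # view 2

X11, X12 = l11 * Kinv @ x11, l12 * Kinv @ x12        # back-projections, view 1
X21, X22 = l21 * Kinv @ x21, l22 * Kinv @ x22        # back-projections, view 2

# Rigid motions preserve distances: one polynomial constraint per point pair.
c = (X11 - X12).dot(X11 - X12) - (X21 - X22).dot(X21 - X22)
print(sp.expand(c))                                  # polynomial in f, depths
```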
Abstract: Turbulent flows are chaotic and multi-scale dynamical systems with large numbers of degrees of freedom. Turbulent flows, however, can be modelled with a smaller number of degrees of freedom when using an appropriate coordinate system, which is the goal of dimensionality reduction via nonlinear autoencoders. Autoencoders are expressive tools, but they are difficult to interpret. The goal of this paper is to propose a method to aid the interpretability of autoencoders: the decoder decomposition. First, we propose the decoder decomposition, a post-processing method that connects the latent variables to the coherent structures of flows. Second, we apply the decoder decomposition to analyse the latent space of synthetic data of a two-dimensional unsteady wake past a cylinder. We find that the dimension of the latent space has a significant impact on the interpretability of autoencoders. We identify the physical and spurious latent variables. Third, we apply the decoder decomposition to the latent space of wind-tunnel experimental data of a three-dimensional turbulent wake past a bluff body. We show that the reconstruction error is a function of both the latent space dimension and the decoder size, which are correlated. Finally, we apply the decoder decomposition to rank and select latent variables based on the coherent structures that they represent. This is useful for filtering unwanted or spurious latent variables, and for pinpointing specific coherent structures of interest. The ability to rank and select latent variables will help users design and interpret nonlinear autoencoders.
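One plausible reading of the idea (not the paper's exact definition): attribute a flow structure to each latent variable by decoding a small perturbation of that variable about a base latent state, i.e. a finite-difference Jacobian of the decoder, and then rank the variables by the energy of their structures. The decoder sizes below are placeholder assumptions, and the decoder is untrained for brevity.

```python
# Hedged sketch: per-latent-variable "decoder modes" via finite differences.
import torch
import torch.nn as nn

n_latent, n_grid = 4, 256
decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                        nn.Linear(64, n_grid))       # assumed trained decoder

z0, eps = torch.zeros(n_latent), 1e-3                # base latent state
modes = []
with torch.no_grad():
    base = decoder(z0)
    for i in range(n_latent):
        dz = torch.zeros(n_latent); dz[i] = eps
        modes.append((decoder(z0 + dz) - base) / eps)  # structure of variable i

# Rank latent variables by the energy of their associated structure.
energy = torch.stack([m.pow(2).sum() for m in modes])
print(energy.argsort(descending=True))               # most energetic first
```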
Abstract: In one calculation, adjoint sensitivity analysis provides the gradient of a quantity of interest with respect to all of a system's parameters. Conventionally, adjoint solvers need to be implemented by differentiating computational models, which can be a cumbersome task and is code-specific. To propose an adjoint solver that is not code-specific, we develop a data-driven strategy. We demonstrate its application on the computation of gradients of long-time averages of chaotic flows. First, we deploy a parameter-aware echo state network (ESN) to accurately forecast and simulate the dynamics of a dynamical system for a range of system parameters. Second, we derive the adjoint of the parameter-aware ESN. Finally, we combine the parameter-aware ESN with its adjoint version to compute the sensitivities to the system parameters. We showcase the method on a prototypical chaotic system. Because adjoint sensitivities in chaotic regimes diverge for long integration times, we analyse the application of the ensemble adjoint method to the ESN. We find that the adjoint sensitivities obtained from the ESN closely match those of the original system. This work opens possibilities for sensitivity analysis without code-specific adjoint solvers.
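A sketch of the idea under an assumed parameter-aware ESN form, $r_{t+1} = \tanh(A r_t + b\,p)$ with closed-loop matrix $A = W + W_{in} W_{out}$ (not the paper's exact derivation): a hand-coded adjoint sweep gives the gradient of a time-averaged observable with respect to a scalar parameter $p$. All weights are random placeholders; `W_out` would normally be trained.

```python
# Hedged sketch: adjoint of a closed-loop, parameter-aware echo state network.
import numpy as np

rng = np.random.default_rng(2)
N = 100
W = rng.normal(0, 1 / np.sqrt(N), (N, N))
W_in = rng.normal(0, 0.1, (N, 1))
W_out = rng.normal(0, 0.1, (1, N))      # would normally be trained
b = rng.normal(0, 0.1, N)               # direction in which p enters (assumed)
p, T = 0.5, 300

A = W + W_in @ W_out                    # closed-loop linear part
r, traj = np.zeros(N), []               # forward pass, storing states
for _ in range(T):
    r = np.tanh(A @ r + b * p)
    traj.append(r)

# Objective J = (1/T) * sum_t (W_out r_t)[0]; its state-gradient is constant.
g_r = W_out[0] / T

lam, dJdp = np.zeros(N), 0.0            # backward (adjoint) sweep
for t in range(T - 1, -1, -1):
    lam = lam + g_r                     # collect dJ/dr_{t+1}
    D = 1 - traj[t] ** 2                # tanh' at step t+1
    dJdp += lam @ (D * b)               # direct contribution of p at this step
    lam = A.T @ (D * lam)               # propagate the adjoint backwards
print("dJ/dp =", dJdp)
```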
Abstract: Computing distances on Riemannian manifolds is a challenging problem with numerous applications, from physics, through statistics, to machine learning. In this paper, we introduce the metric-constrained Eikonal solver to obtain continuous, differentiable representations of distance functions on manifolds. The differentiable nature of these representations allows for the direct computation of globally length-minimising paths on the manifold. We showcase the use of metric-constrained Eikonal solvers for a range of manifolds and demonstrate two applications. First, we show that metric-constrained Eikonal solvers can be used to obtain the Fr\'echet mean on a manifold, using a Gaussian mixture model for which an analytical solution is available to verify the numerical results. Second, we demonstrate how the obtained distance function can be used to conduct unsupervised clustering on the manifold -- a task for which existing approaches are computationally prohibitive. This work opens opportunities for distance computations on manifolds.
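An illustrative sketch of the idea (not the authors' solver): train a network $\phi(x) \approx d(x_0, x)$ by penalising the Eikonal residual $\lVert\nabla\phi\rVert_g = 1$ together with the boundary condition $\phi(x_0) = 0$. The metric here is the identity, so the solution should approach the Euclidean distance; the architecture, domain, and loss weights are assumptions, and training such losses can be sensitive to initialisation.

```python
# Hedged sketch: a neural distance function trained with an Eikonal residual,
# here for the flat metric g = I on [-2, 2]^2.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
x0 = torch.zeros(1, 2)                            # source point: d(x0, x0) = 0
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(1000):
    x = 4 * torch.rand(256, 2) - 2                # sample the domain
    x.requires_grad_(True)
    phi = net(x)
    grad = torch.autograd.grad(phi.sum(), x, create_graph=True)[0]
    # metric-constrained Eikonal residual: for g = I, |grad phi|_g = |grad phi|
    eik = ((grad.norm(dim=1) - 1) ** 2).mean()
    bc = net(x0).pow(2).mean()                    # distance vanishes at source
    loss = eik + 10 * bc
    opt.zero_grad(); loss.backward(); opt.step()

print(net(torch.tensor([[1.0, 0.0]])).item())     # ~1.0 if training succeeded
```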