Abstract: Reduced Order Modelling (ROM) has been widely used to create lower-order, computationally inexpensive representations of higher-order dynamical systems. Using these representations, ROMs can efficiently model flow fields with significantly fewer parameters. Conventional ROMs accomplish this by linearly projecting higher-order manifolds to a lower-dimensional space using dimensionality reduction techniques such as Proper Orthogonal Decomposition (POD). In this work, we develop DL-ROM (Deep Learning - Reduced Order Modelling), a novel deep learning framework that trains a neural network to perform non-linear projections to reduced-order states. We then use the learned reduced state to efficiently predict future time steps of the simulation using 3D Autoencoder and 3D U-Net based architectures. DL-ROM produces highly accurate reconstructions from the learned ROM and can therefore efficiently predict future time steps by traversing temporally through the learned reduced state. All of this is achieved without ground-truth supervision and without iteratively solving the expensive Navier-Stokes (NS) equations, resulting in massive computational savings. To test the effectiveness and performance of our approach, we evaluate our implementation on five different Computational Fluid Dynamics (CFD) datasets using reconstruction performance and computational runtime metrics. DL-ROM reduces the computational runtime of iterative solvers by nearly two orders of magnitude while maintaining an acceptable error threshold.
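The linear projection baseline mentioned above (POD) can be sketched concisely. The following is a minimal illustration only, not the paper's implementation; all function and variable names are our own, and POD is computed here via the SVD of a mean-centred snapshot matrix:

```python
import numpy as np

def pod_project(snapshots, r):
    """Project snapshots onto the leading r POD modes.

    snapshots: (n_dof, n_time) array, one flow-field snapshot per column.
    Returns (modes, coeffs, reconstruction).
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                     # mean-centred snapshot matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                         # spatial POD modes (n_dof, r)
    coeffs = modes.T @ X                     # reduced-order coordinates (r, n_time)
    recon = mean + modes @ coeffs            # linear reconstruction of the field
    return modes, coeffs, recon

# Toy example: 200 degrees of freedom, 20 snapshots, keep 5 modes.
rng = np.random.default_rng(0)
snaps = rng.standard_normal((200, 20))
modes, coeffs, recon = pod_project(snaps, r=5)
```

The point of contrast with DL-ROM is that `modes @ coeffs` is a purely linear map; the paper's framework replaces it with a learned non-linear projection.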
Abstract: The importance of state estimation in fluid mechanics is well established; it is required for several tasks, including design/optimization, active control, and future state prediction. A common tactic in this regard is to rely on reduced order models. Such approaches generally use measurement data from a single time instance. However, the data available from sensors is often sequential, and ignoring it results in information loss. In this paper, we propose a novel deep learning based state estimation framework that learns from sequential data. The proposed model structure consists of a recurrent cell that passes information between time steps, enabling this information to be used to recover the full state. We illustrate that utilizing sequential data allows state recovery from only one or two sensors. For efficient recovery of the state, the proposed approach is coupled with an autoencoder-based reduced order model. We demonstrate the performance of the proposed approach on two examples, and it is found to outperform existing alternatives in the literature.
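The core idea of folding sequential sparse measurements into a full-state estimate can be sketched with a minimal recurrent cell. This is an illustrative assumption, not the authors' architecture: the tanh cell, the random (untrained) weights, and all names and sizes below are ours, and the decoder stands in for the autoencoder-based ROM:

```python
import numpy as np

def rnn_state_estimate(sensor_seq, Wx, Wh, Wout):
    """Fold a sequence of sensor readings into a hidden state, then decode.

    sensor_seq: (T, n_sensors) sequential measurements;
    Wx: (n_hidden, n_sensors); Wh: (n_hidden, n_hidden); Wout: (n_state, n_hidden).
    """
    h = np.zeros(Wh.shape[0])
    for x in sensor_seq:               # recurrent cell passes information across time steps
        h = np.tanh(Wx @ x + Wh @ h)
    return Wout @ h                    # decode hidden state to the full state

# Toy setup: only 2 sensors, 10 time steps, a 100-dimensional full state.
rng = np.random.default_rng(1)
T, n_sensors, n_hidden, n_state = 10, 2, 16, 100
seq = rng.standard_normal((T, n_sensors))
Wx = 0.1 * rng.standard_normal((n_hidden, n_sensors))
Wh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
Wout = 0.1 * rng.standard_normal((n_state, n_hidden))
state = rnn_state_estimate(seq, Wx, Wh, Wout)
```

In the paper's framework the weights are learned and the decoding step is performed by the reduced order model; the sketch only shows how a recurrent cell accumulates information from a sensor sequence that a single-snapshot estimator would discard.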