Abstract: This work presents a data-driven solution to accurately predict parameterized nonlinear fluid dynamical systems using a dynamics-generator conditional GAN (Dyn-cGAN) as a surrogate model. The Dyn-cGAN includes a dynamics block within a modified conditional GAN, enabling the simultaneous identification of temporal dynamics and their dependence on system parameters. The learned Dyn-cGAN model takes the system parameters into account to accurately predict the flow fields of the system. We evaluate the effectiveness and limitations of the developed Dyn-cGAN through numerical studies of various parameterized nonlinear fluid dynamical systems, including flow over a cylinder and a 2-D cavity problem, at different Reynolds numbers. Furthermore, we examine how the Reynolds number affects prediction accuracy in both case studies. Additionally, we investigate how the number of time steps used in training the dynamics block affects prediction accuracy, and we find that an optimal value exists, determined by prediction errors and mutual information relative to the ground truth.
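To make the architecture described above concrete, the following is a minimal sketch, not the authors' implementation, of a conditional generator whose latent state is advanced by a learned dynamics block before being decoded into a flow-field snapshot, so that temporal evolution and parameter dependence (e.g., the Reynolds number) are learned jointly. All class names, layer sizes, and the residual update are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicsBlock(nn.Module):
    """Advances a latent state one time step, conditioned on system parameters.
    (Hypothetical structure; the paper's dynamics block may differ.)"""
    def __init__(self, latent_dim: int, param_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + param_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        # Residual update of the latent state given the system parameters.
        return z + self.net(torch.cat([z, params], dim=-1))

class DynCGANGenerator(nn.Module):
    """Rolls the latent dynamics forward and decodes each state to a flow field."""
    def __init__(self, latent_dim: int, param_dim: int, field_size: int):
        super().__init__()
        self.dynamics = DynamicsBlock(latent_dim, param_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, field_size),  # flattened flow-field snapshot
        )

    def forward(self, z0: torch.Tensor, params: torch.Tensor, n_steps: int):
        z, frames = z0, []
        for _ in range(n_steps):
            z = self.dynamics(z, params)       # advance dynamics one step
            frames.append(self.decoder(z))     # decode state to a snapshot
        return torch.stack(frames, dim=1)      # (batch, n_steps, field_size)

# Usage: predict 50 snapshots for four normalized Reynolds numbers.
gen = DynCGANGenerator(latent_dim=32, param_dim=1, field_size=64 * 64)
z0 = torch.randn(4, 32)                                   # initial latent states
reynolds = torch.tensor([[100.], [200.], [300.], [400.]]) / 400.0
pred = gen(z0, reynolds, n_steps=50)                      # torch.Size([4, 50, 4096])
```

In an adversarial setup, such a generator would be trained against a discriminator that also receives the system parameters as a condition; that training loop is omitted here for brevity.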
Abstract: This paper develops a deep learning framework based on convolutional neural networks (CNNs) that enables real-time extraction of full-field subpixel structural displacements from videos. In particular, two new CNN architectures are designed and trained on a dataset generated by the phase-based motion extraction method from a single lab-recorded high-speed video of a dynamic structure. As displacements are reliable only in regions with sufficient texture contrast, the sparsity of the motion field induced by the texture mask is accounted for in both the network architecture design and the loss function definition. Results show that, with supervision from both full and sparse motion fields, the trained network is capable of identifying the pixels with sufficient texture contrast as well as their subpixel motions. The performance of the trained networks is tested on videos of various other structures to extract the full-field motion (e.g., displacement time histories), indicating that the trained networks generalize to accurately extract full-field subtle displacements for pixels with sufficient texture contrast.
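To illustrate the texture-mask idea in the loss function, the following is a minimal sketch, not the paper's exact loss, of a displacement-field loss restricted by a binary texture mask, so that only pixels with sufficient texture contrast supervise the network. The tensor shapes, the L1 penalty, and the epsilon term are assumptions for this example.

```python
import torch
import torch.nn.functional as F

def masked_motion_loss(pred_flow: torch.Tensor,
                       target_flow: torch.Tensor,
                       texture_mask: torch.Tensor,
                       eps: float = 1e-8) -> torch.Tensor:
    """pred_flow / target_flow: (B, 2, H, W) subpixel u/v displacement fields.
    texture_mask: (B, 1, H, W), 1 where texture contrast is sufficient, else 0."""
    per_pixel = F.l1_loss(pred_flow, target_flow, reduction="none")  # (B, 2, H, W)
    masked = per_pixel * texture_mask          # zero out low-texture pixels
    # Normalize by the count of valid pixels so sparse masks do not shrink the loss.
    return masked.sum() / (texture_mask.sum() * pred_flow.shape[1] + eps)

# Usage with random stand-ins for network output and phase-based ground truth:
B, H, W = 2, 64, 64
pred = torch.randn(B, 2, H, W, requires_grad=True)
target = torch.randn(B, 2, H, W)
mask = (torch.rand(B, 1, H, W) > 0.7).float()  # ~30% of pixels treated as textured
loss = masked_motion_loss(pred, target, mask)
loss.backward()
```

Normalizing by the number of masked-in pixels, rather than by the full image area, keeps the gradient magnitude comparable across frames with very different amounts of texture.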