Abstract: We present a novel reduced-order model (ROM) that leverages optimal transport (OT) theory and displacement interpolation to enhance the representation of nonlinear dynamics in complex systems. Traditional ROM techniques struggle in this setting, especially when data (i.e., observational snapshots) are limited; our method addresses these issues by introducing a data augmentation strategy based on OT principles. The proposed framework generates interpolated solutions that trace geodesic paths in the space of probability distributions, enriching the training dataset for the ROM. A key feature of our approach is its ability to provide a continuous representation of the solution's dynamics by exploiting a virtual-to-real time mapping, which enables the reconstruction of solutions at finer temporal scales than those provided by the original data. To further improve prediction accuracy, we employ Gaussian process regression to learn the residual between the interpolated snapshots and the physical solution, correcting the interpolated representation accordingly. We demonstrate the effectiveness of our methodology on atmospheric mesoscale benchmarks characterized by highly nonlinear, advection-dominated dynamics. Our results show improved accuracy and efficiency in predicting complex system behaviors, indicating the potential of this approach for a wide range of applications in computational physics and engineering.
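To make the interpolation step concrete, the following is a minimal NumPy sketch of one-dimensional displacement interpolation (the Wasserstein geodesic between two densities), assuming snapshots are given as densities on a shared grid; the function names and grid setup are illustrative assumptions, not the paper's implementation. In 1D the geodesic has a closed form obtained by interpolating the quantile (inverse CDF) functions.

```python
import numpy as np

def displacement_interpolate(p0, p1, x, t):
    """McCann (displacement) interpolation between 1D densities p0 and p1
    sampled on a common grid x, at pseudo-time t in [0, 1]. In 1D the
    Wasserstein geodesic reduces to interpolating the quantile functions."""
    F0 = np.cumsum(p0); F0 /= F0[-1]        # cumulative distribution functions
    F1 = np.cumsum(p1); F1 /= F1[-1]
    q = np.linspace(0.0, 1.0, x.size)       # shared quantile levels
    Q0 = np.interp(q, F0, x)                # quantile functions F^{-1}
    Q1 = np.interp(q, F1, x)
    Qt = (1.0 - t) * Q0 + t * Q1            # geodesic in quantile space
    Ft = np.interp(x, Qt, q)                # interpolated CDF back on grid x
    pt = np.maximum(np.gradient(Ft, x), 0)  # differentiate to recover a density
    return pt / (pt.sum() * (x[1] - x[0]))  # renormalize to unit mass

# Illustrative usage: enrich a pair of snapshots with intermediate states.
x = np.linspace(0.0, 1.0, 200)
p0 = np.exp(-((x - 0.25) / 0.05) ** 2); p0 /= p0.sum() * (x[1] - x[0])
p1 = np.exp(-((x - 0.75) / 0.05) ** 2); p1 /= p1.sum() * (x[1] - x[0])
augmented = [displacement_interpolate(p0, p1, x, t) for t in np.linspace(0, 1, 9)]
```

Unlike linear blending (1 - t) p0 + t p1, which superposes two bumps, displacement interpolation translates mass along the geodesic, which is why it suits advection-dominated dynamics; the pseudo-time t plays the role of the virtual time that is then mapped to physical time.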
Abstract: Reduced-order models (ROMs) are widely used in scientific computing to tackle high-dimensional systems. However, traditional ROM methods may only partially capture the intrinsic geometric characteristics of the data: the underlying structure, relationships, and essential features crucial for accurate modeling. To overcome this limitation, we propose a novel ROM framework that integrates optimal transport (OT) theory and neural network-based methods. Specifically, we investigate the Kernel Proper Orthogonal Decomposition (kPOD) method with the Wasserstein distance as a custom kernel, and we efficiently train the resulting neural network (NN) by employing the Sinkhorn algorithm. By leveraging an OT-based nonlinear reduction, the presented framework captures the geometric structure of the data, which is crucial for accurate learning of the reduced solution manifold. Compared with traditional loss functions such as the mean squared error or cross-entropy, the Sinkhorn divergence enhances training stability, improves robustness against overfitting and noise, and accelerates convergence. To showcase the approach's effectiveness, we conduct experiments on a set of challenging test cases exhibiting a slow decay of the Kolmogorov n-width. The results show that our framework outperforms traditional ROM methods in terms of accuracy and computational efficiency.
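As context for the two OT ingredients mentioned above, below is a minimal NumPy sketch of the Sinkhorn iterations and of a Wasserstein-based kPOD Gram matrix, assuming snapshots are histograms on a shared grid with ground-cost matrix C; the function names and the Gaussian-of-divergence kernel form are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=1e-2, n_iter=200):
    """Entropy-regularized OT cost between histograms a and b with ground
    cost matrix C, via Sinkhorn-Knopp matrix scaling. (A log-domain
    variant is preferable for small eps; omitted here for brevity.)"""
    K = np.exp(-C / eps)                     # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # approximate transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(a, b, C, eps=1e-2):
    """Debiased divergence S(a, b) = OT(a, b) - (OT(a, a) + OT(b, b)) / 2,
    which vanishes when a == b and behaves better as a training loss."""
    return sinkhorn_cost(a, b, C, eps) - 0.5 * (
        sinkhorn_cost(a, a, C, eps) + sinkhorn_cost(b, b, C, eps))

def kpod_gram_matrix(snapshots, C, gamma=1.0, eps=1e-2):
    """Illustrative kPOD kernel: a Gaussian kernel evaluated on pairwise
    Sinkhorn divergences; its eigendecomposition yields the nonlinear
    reduced coordinates."""
    n = len(snapshots)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            d = sinkhorn_divergence(snapshots[i], snapshots[j], C, eps)
            G[i, j] = G[j, i] = np.exp(-gamma * d)
    return G
```

In practice the same iterations would be run in an automatic-differentiation framework, which is what allows the Sinkhorn divergence to serve directly as the NN training loss.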