Abstract: We propose a novel finite element-based physics-informed operator learning framework for predicting spatiotemporal dynamics governed by partial differential equations (PDEs). The framework employs a loss function inspired by the finite element method (FEM) combined with the implicit Euler time integration scheme. A transient thermal conduction problem is considered to benchmark the performance. The proposed operator learning framework takes the temperature field at the current time step as input and predicts the temperature field at the next time step. The Galerkin-discretized weak form of the heat equation is employed to incorporate physics into the loss function; we coin this approach finite operator learning (FOL). Upon training, the networks successfully predict the temperature evolution over time for any initial temperature field with high accuracy compared to the FEM solution. The framework is also shown to handle heterogeneous thermal conductivity and arbitrary geometries. The advantages of FOL can be summarized as follows. First, the training is performed in an unsupervised manner, avoiding the need for a large data set prepared from costly simulations or experiments. Instead, random temperature patterns generated by a Gaussian random process and a Fourier series, combined with constant temperature fields, are used as training data to cover the possible temperature cases. Second, shape functions and the backward difference approximation are exploited to discretize the domain, resulting in a purely algebraic equation. This enhances training efficiency, as one avoids time-consuming automatic differentiation when optimizing weights and biases, at the cost of accepting possible discretization errors. Finally, thanks to the interpolation power of FEM, any arbitrary geometry can be handled with FOL, which is crucial for addressing various engineering application scenarios.
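As a minimal sketch of the algebraic loss this construction leads to, assume a linear heat equation semi-discretized with a capacity matrix \(\mathbf{M}\), a conductivity matrix \(\mathbf{K}\), and time step \(\Delta t\); source and boundary terms are omitted here for brevity, and the exact assembly follows the paper:
\[
\mathbf{R}(\boldsymbol{\theta}) \;=\; \mathbf{M}\,\frac{\hat{\mathbf{T}}^{\,n+1}(\boldsymbol{\theta}) - \mathbf{T}^{\,n}}{\Delta t} \;+\; \mathbf{K}\,\hat{\mathbf{T}}^{\,n+1}(\boldsymbol{\theta}),
\qquad
\mathcal{L}(\boldsymbol{\theta}) \;=\; \big\lVert \mathbf{R}(\boldsymbol{\theta}) \big\rVert_2^{2},
\]
where \(\hat{\mathbf{T}}^{\,n+1}(\boldsymbol{\theta})\) denotes the nodal temperatures predicted by the network with parameters \(\boldsymbol{\theta}\) from the input field \(\mathbf{T}^{\,n}\). Because the residual is purely algebraic, no automatic differentiation with respect to the spatial or temporal coordinates is needed during training.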
Abstract: To develop faster solvers for the governing physical equations in solid mechanics, we introduce a method that parametrically learns the solution to mechanical equilibrium. The introduced method outperforms traditional solvers in terms of computational cost while maintaining acceptable accuracy. Moreover, it generalizes and enhances standard physics-informed neural networks to learn a parametric solution with rather sharp discontinuities. We focus on micromechanics as an example, where knowledge of the micromechanical solution, i.e., the deformation and stress fields for a given heterogeneous microstructure, is crucial. The parameter under investigation is the Young's modulus distribution within the heterogeneous solid system. Our method, inspired by operator learning and the finite element method, demonstrates the ability to train without relying on data from other numerical solvers. Instead, we leverage ideas from the finite element approach to efficiently set up the loss functions algebraically, particularly based on the discretized weak form of the governing equations. Notably, our investigations reveal that physics-based training yields higher accuracy than purely data-driven approaches for unseen microstructures. In essence, this method achieves independence from data and enhances accuracy for predictions beyond the training range. The aforementioned observations apply here to heterogeneous elastic microstructures. Comparisons are also made with other well-known operator learning algorithms, such as DeepONet, to further emphasize the advantages of the newly proposed architecture.
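A minimal sketch of how such an algebraic, weak-form-based loss can look for linear elasticity is given below; the stiffness assembly, sampling strategy, and boundary treatment are simplified for illustration and are not the paper's exact formulation:
\[
\mathbf{R}\big(\boldsymbol{\theta};\,\mathbf{E}\big) \;=\; \mathbf{K}(\mathbf{E})\,\hat{\mathbf{u}}(\boldsymbol{\theta};\mathbf{E}) \;-\; \mathbf{f},
\qquad
\mathcal{L}(\boldsymbol{\theta}) \;=\; \sum_{\mathbf{E}\in\mathcal{S}} \big\lVert \mathbf{R}(\boldsymbol{\theta};\mathbf{E}) \big\rVert_2^{2},
\]
where \(\mathbf{K}(\mathbf{E})\) is the stiffness matrix assembled for a given Young's modulus field \(\mathbf{E}\), \(\hat{\mathbf{u}}\) is the nodal displacement vector predicted by the network, \(\mathbf{f}\) is the nodal force vector, and \(\mathcal{S}\) is the set of sampled microstructures. The network thus acts as a parametric solver: once trained, a new \(\mathbf{E}\) is mapped to its equilibrium solution in a single forward pass.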
Abstract: We apply physics-informed neural networks to solve the constitutive relations for nonlinear, path-dependent material behavior. As a result, the trained network not only satisfies all thermodynamic constraints but also instantly provides information about the current material state (i.e., free energy, stress, and the evolution of internal variables) under any given loading scenario without requiring initial data. One advantage of this work is that it bypasses the repetitive Newton iterations needed to solve the nonlinear equations in complex material models. Additionally, strategies are provided to reduce the required order of differentiation for obtaining the tangent operator. The trained model can be directly used in any finite element package (or other numerical methods) as a user-defined material model. However, challenges remain in the proper definition of collocation points and in integrating several inequality constraints that become active or inactive simultaneously. We test this methodology on rate-independent processes such as the classical von Mises plasticity model with a nonlinear hardening law, as well as local damage models for interface cracking behavior with a nonlinear softening law. Finally, we discuss the potential and remaining challenges for future developments of this new approach.
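As a hedged illustration of the kind of constraints such a network has to respect, consider standard von Mises plasticity with isotropic hardening (the notation here is generic and not taken from the paper): the yield function and the loading/unloading conditions read
\[
f(\boldsymbol{\sigma},\alpha) \;=\; \sqrt{\tfrac{3}{2}}\,\lVert \boldsymbol{s} \rVert \;-\; \big(\sigma_{y0} + h(\alpha)\big) \;\le\; 0,
\qquad
\dot{\lambda} \;\ge\; 0,
\qquad
\dot{\lambda}\, f \;=\; 0,
\]
where \(\boldsymbol{s}\) is the deviatoric stress, \(\sigma_{y0}\) the initial yield stress, and \(h(\alpha)\) a nonlinear hardening function of the accumulated plastic strain \(\alpha\). A trained network that returns the stress and the updated internal variables directly, while respecting these conditions, avoids the Newton iterations of a classical return-mapping algorithm.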
Abstract: Deep learning methods find the solution to a boundary value problem by defining the loss functions of neural networks based on the governing equations, boundary conditions, and initial conditions. In previous work, the authors showed that for many engineering problems, designing the loss functions based on first-order derivatives results in much better accuracy, especially when there is heterogeneity and variable jumps in the domain \cite{REZAEI2022PINN}. This so-called mixed formulation for PINNs has been applied to basic engineering problems such as the balance of linear momentum and diffusion problems. In this work, the proposed mixed formulation is further extended to solve multi-physical problems. In particular, we focus on a stationary thermo-mechanically coupled system of equations that can be utilized in designing the microstructure of advanced materials. First, sequential unsupervised training and, second, fully coupled unsupervised learning are discussed. The results of each approach are compared in terms of accuracy and the corresponding computational cost. Finally, the idea of transfer learning is employed by combining data and physics to address the capability of the network to predict the response of the system for unseen cases. The outcome of this work will be useful for many other engineering applications where deep learning is employed for multiple coupled systems of equations.
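As a schematic of the two training strategies (weighting factors, the exact residual definitions, and boundary terms are omitted here), the fully coupled approach minimizes
\[
\mathcal{L}_{\text{coupled}}(\boldsymbol{\theta}) \;=\; \mathcal{L}_{u}(\boldsymbol{\theta}) \;+\; \mathcal{L}_{T}(\boldsymbol{\theta}),
\]
where \(\mathcal{L}_{u}\) and \(\mathcal{L}_{T}\) collect the mixed-formulation residuals of the balance of linear momentum and of the steady-state heat equation, respectively. In the sequential strategy, the two terms are instead minimized alternately, with the other field held fixed between passes.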
Abstract: Physics-informed neural networks (PINNs) are capable of finding the solution to a given boundary value problem. We employ several ideas from the finite element method (FEM) to enhance the performance of existing PINNs in engineering problems. The main contribution of the current work is to promote using the spatial gradient of the primary variable as an output of separate neural networks. The strong form, which involves higher-order derivatives, is then applied to these spatial gradients as a physical constraint. In addition, the so-called energy form of the problem is applied to the primary variable as an additional constraint for training. The proposed approach requires only up to first-order derivatives to construct the physical loss functions. We discuss why this point is beneficial through various comparisons between different models. The mixed-formulation-based PINNs and FE methods share some similarities: while the former minimizes the PDE and its energy form at given collocation points, utilizing a complex nonlinear interpolation through a neural network, the latter does the same at the element nodes with the help of shape functions. We focus on heterogeneous solids to show the capability of deep learning for predicting the solution in a complex environment under different boundary conditions. The performance of the proposed PINN model is checked against the FEM solution on two prototype problems: elasticity and the Poisson equation (a steady-state diffusion problem). We conclude that, by properly designing the network architecture, the deep learning model has the potential to solve for the unknowns in a heterogeneous domain without any initial data from other sources. Finally, discussions are provided on the combination of PINNs and FEM for the fast and accurate design of composite materials in future developments.
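A minimal sketch of such a mixed-formulation loss for the steady-state diffusion problem \(-\nabla\!\cdot\!\big(k\,\nabla u\big) = s\) is given below; here \(\hat{u}\) and the flux \(\hat{\mathbf{q}}\) are outputs of separate networks, and weighting factors and boundary terms are omitted for brevity:
\[
\mathcal{L}(\boldsymbol{\theta}) \;=\;
\big\lVert \nabla\!\cdot\hat{\mathbf{q}} - s \big\rVert^{2}_{\Omega}
\;+\;
\big\lVert \hat{\mathbf{q}} + k\,\nabla \hat{u} \big\rVert^{2}_{\Omega}
\;+\;
\int_{\Omega}\!\Big(\tfrac{1}{2}\,k\,\lVert\nabla \hat{u}\rVert^{2} - s\,\hat{u}\Big)\,\mathrm{d}\Omega ,
\]
where the first term enforces the strong form on the flux network, the second enforces compatibility between the two networks, and the third is the energy form evaluated on the primary variable; the norms denote mean-squared residuals over the collocation points. Only first-order derivatives of the network outputs appear in all three terms.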