Abstract:Physics-Informed Neural Networks (PINNs) have gained increasing attention for solving partial differential equations, including the Helmholtz equation, due to their flexibility and mesh-free formulation. However, their low-frequency bias limits their accuracy and convergence speed for high-frequency wavefield simulations. To alleviate these problems, we propose a simplified PINN framework that incorporates Gabor functions, designed to capture the oscillatory and localized nature of wavefields more effectively. Unlike previous attempts that rely on auxiliary networks to learn Gabor parameters, we redefine the network's task to map input coordinates to a custom Gabor coordinate system, simplifying the training process without increasing the number of trainable parameters compared to a simple PINN. We validate the proposed method across multiple velocity models, including the complex Marmousi and Overthrust models, and demonstrate its superior accuracy, faster convergence, and greater robustness compared to both traditional PINNs and earlier Gabor-based PINNs. Additionally, we propose an efficient integration of a Perfectly Matched Layer (PML) to enhance wavefield behavior near the boundaries. These results suggest that our approach offers an efficient and accurate alternative for scattered wavefield modeling and lays the groundwork for future improvements in PINN-based seismic applications.
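A minimal PyTorch sketch of the idea described above, in which the network maps input coordinates to a Gabor coordinate and the output is a weighted sum of Gabor atoms; the class name, layer sizes, and the exact Gabor form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: a PINN whose body maps (x, z) to per-atom Gabor coordinates (assumed form).
import torch
import torch.nn as nn

class GaborCoordPINN(nn.Module):
    def __init__(self, hidden=64, n_atoms=32, omega=2.0 * torch.pi * 10.0):
        super().__init__()
        self.omega = omega  # angular frequency of the wavefield, assumed known a priori
        self.body = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_atoms),            # per-atom "Gabor coordinate" for each input point
        )
        self.weights = nn.Linear(n_atoms, 2, bias=False)  # real and imaginary wavefield parts

    def forward(self, xz):
        u = self.body(xz)                                           # custom Gabor coordinates
        gabor = torch.exp(-0.5 * u**2) * torch.cos(self.omega * u)  # Gabor atoms in that coordinate
        return self.weights(gabor)                                  # weighted summation -> (Re, Im)

# usage: evaluate the scattered wavefield at collocation points
model = GaborCoordPINN()
xz = torch.rand(1024, 2, requires_grad=True)   # (x, z) collocation points
uv = model(xz)                                  # predicted real/imaginary wavefield components
```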
Abstract:The utilization of seismic data is often hampered by noise contamination, incomplete acquisition, and limited low-frequency information, which hinder accurate subsurface imaging and interpretation. Traditional processing methods rely heavily on task-specific designs to address these challenges and fail to account for the variability of the data. To address these limitations, we present a generative seismic foundation model (GSFM), a unified framework based on generative diffusion models (GDMs), designed to tackle multi-task seismic processing challenges, including denoising, backscattered noise attenuation, interpolation, and low-frequency extrapolation. GSFM leverages a pre-training stage on synthetic data to capture the features of clean, complete, and broadband seismic data distributions and applies an iterative fine-tuning strategy to adapt the model to field data. By adopting a target-oriented diffusion process prediction, GSFM improves computational efficiency without compromising accuracy. Synthetic data tests demonstrate that GSFM surpasses benchmarks with equivalent architectures in all tasks and achieves performance comparable to traditional pre-training strategies, even after their fine-tuning. Field data tests further suggest that our iterative fine-tuning approach addresses the generalization limitations of conventional pre-training and fine-tuning paradigms, delivering significantly enhanced performance across diverse tasks. Furthermore, GSFM's inherent probabilistic nature enables effective uncertainty quantification, offering valuable insights into the reliability of the processing results.
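A rough sketch of one training step for a conditional diffusion model that predicts the clean target directly, which is one common reading of "target-oriented prediction"; the denoiser interface, channel-concatenation conditioning, and the cosine noise schedule below are illustrative assumptions.

```python
# Minimal sketch of a target-prediction diffusion training step (assumed schedule and conditioning).
import torch

def training_step(denoiser, clean, degraded, T=1000):
    """clean: complete/broadband target data; degraded: noisy/incomplete/band-limited input."""
    t = torch.randint(0, T, (clean.shape[0],), device=clean.device)
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / T) ** 2           # cosine noise schedule
    noise = torch.randn_like(clean)
    noisy = alpha_bar.sqrt()[:, None, None, None] * clean + \
            (1 - alpha_bar).sqrt()[:, None, None, None] * noise          # forward diffusion of the target
    pred = denoiser(torch.cat([noisy, degraded], dim=1), t)              # condition by channel concatenation
    return torch.nn.functional.mse_loss(pred, clean)                     # predict the clean target itself
```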
Abstract:Physics-informed neural networks (PINNs) face significant challenges in modeling multi-frequency wavefields in complex velocity models due to their slow convergence, difficulty in representing high-frequency details, and lack of generalization to varying frequencies and velocity scenarios. To address these issues, we propose Meta-LRPINN, a novel framework that combines low-rank parameterization using singular value decomposition (SVD) with meta-learning and frequency embedding. Specifically, we decompose the weights of the PINN's hidden layers using SVD and introduce an innovative frequency embedding hypernetwork (FEH) that links input frequencies with the singular values, enabling efficient and frequency-adaptive wavefield representation. Meta-learning is employed to provide robust initialization, improving optimization stability and reducing training time. Additionally, we implement adaptive rank reduction and FEH pruning during the meta-testing phase to further enhance efficiency. Numerical experiments on multi-frequency scattered wavefields for different velocity models demonstrate that Meta-LRPINN achieves much faster convergence and much higher accuracy than baseline methods such as Meta-PINN and vanilla PINN. The proposed framework also shows strong generalization to out-of-distribution frequencies while maintaining computational efficiency. These results highlight the potential of Meta-LRPINN for scalable and adaptable seismic wavefield modeling.
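A minimal sketch of a hidden layer with SVD-style low-rank weights whose singular values are produced by a small frequency-embedding hypernetwork, as described above; the rank, layer sizes, and hypernetwork design are illustrative assumptions.

```python
# Minimal sketch: weight W factored as U @ diag(s(f)) @ V^T with frequency-adaptive singular values.
import torch
import torch.nn as nn

class FreqLowRankLayer(nn.Module):
    def __init__(self, dim=128, rank=16):
        super().__init__()
        self.U = nn.Parameter(torch.randn(dim, rank) / dim**0.5)
        self.V = nn.Parameter(torch.randn(dim, rank) / dim**0.5)
        self.bias = nn.Parameter(torch.zeros(dim))
        # frequency embedding hypernetwork: maps the input frequency to the singular values
        self.feh = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, rank))

    def forward(self, x, freq):
        s = self.feh(freq)                 # (batch, rank) frequency-adaptive singular values
        h = (x @ self.V) * s               # project to the rank space and scale
        return torch.tanh(h @ self.U.T + self.bias)

# usage with a 5 Hz wavefield
layer = FreqLowRankLayer()
x = torch.rand(1024, 128)
freq = torch.full((1024, 1), 5.0)
y = layer(x, freq)
```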
Abstract:Full waveform inversion (FWI) often faces challenges due to inadequate seismic observations, resulting in band-limited and geologically inaccurate inversion results. Incorporating prior information from potential velocity distributions, well-log information, and our geological knowledge and expectations can significantly improve FWI convergence to a realistic model. While diffusion-regularized FWI has shown improved performance compared to conventional FWI by incorporating the velocity distribution prior, it can benefit even more from incorporating well-log information and other geological knowledge priors. To leverage this fact, we propose a geological class and well-information prior-assisted FWI using conditional diffusion models. This method seamlessly integrates multi-modal information into FWI, simultaneously achieving data fitting and matching of geologic and geophysical priors, which traditional regularization methods often fail to accomplish. Specifically, we combine conditional diffusion models with FWI, integrating well-log data and geological class conditions into these conditional diffusion models using classifier-free guidance for multi-modal prior matching beyond the original velocity distribution prior. Numerical experiments on the OpenFWI datasets and field marine data demonstrate the effectiveness of our method compared to conventional FWI and the unconditional diffusion-regularized FWI.
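A short sketch of the classifier-free guidance step referenced above, combining the conditional and unconditional score predictions; the model signature (keyword conditions) and the guidance weight are illustrative assumptions.

```python
# Minimal sketch of classifier-free guidance with well-log and geological-class conditions (assumed API).
import torch

def guided_eps(model, x_t, t, well_cond, class_cond, w=2.0):
    """Extrapolate from the unconditional prediction toward the prediction conditioned on the priors."""
    eps_cond = model(x_t, t, well=well_cond, geo_class=class_cond)
    eps_uncond = model(x_t, t, well=None, geo_class=None)   # null conditions, as dropped during training
    return eps_uncond + w * (eps_cond - eps_uncond)
```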
Abstract:Building subsurface velocity models is essential to our goals in utilizing seismic data for Earth discovery and exploration, as well as monitoring. With the dawn of machine learning, these velocity models (or, more precisely, their distribution) can be stored accurately and efficiently in a generative model. These stored velocity model distributions can be utilized to regularize or quantify uncertainties in inverse problems, like full waveform inversion. However, most generators, like normalizing flows or diffusion models, treat the image (velocity model) uniformly, disregarding spatial dependencies and resolution changes with respect to the observation locations. To address this weakness, we introduce VelocityGPT, a novel implementation that utilizes Transformer decoders trained autoregressively to generate a velocity model from the shallow subsurface to the deep. Because seismic data are often recorded on the Earth's surface, a top-down generator can use the inverted information in the shallow section as guidance (a prior) for generating the deeper parts. To facilitate the implementation, we use an additional network to compress the velocity model. We also inject prior information, such as wells or structure (represented by a migration image), into the generation of the velocity model. Using synthetic data, we demonstrate the effectiveness of VelocityGPT as a promising approach in generative model applications for seismic velocity model building.
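A minimal sketch of top-down autoregressive sampling over compressed velocity tokens, with prompt tokens acting as the prior (e.g., shallow velocities, wells, or a migration image); the GPT interface, tokenizer, and prompt encoding are illustrative assumptions.

```python
# Minimal sketch: generate a velocity model row by row, shallow to deep, from a token-level GPT (assumed API).
import torch

@torch.no_grad()
def generate_top_down(gpt, prior_tokens, n_rows, row_len, temperature=1.0):
    seq = prior_tokens                                    # (1, n_prompt) prompt tokens encode the prior
    for _ in range(n_rows * row_len):
        logits = gpt(seq)[:, -1, :] / temperature         # next-token distribution
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)
        seq = torch.cat([seq, nxt], dim=1)
    # return only the generated tokens, arranged as a depth-ordered grid for the decoder/decompressor
    return seq[:, prior_tokens.shape[1]:].reshape(1, n_rows, row_len)
```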
Abstract:Carbon capture and storage (CCS) plays a crucial role in mitigating greenhouse gas emissions, particularly from industrial outputs. Seismic monitoring can provide an accurate and robust means of ensuring the effectiveness of CCS and mitigating associated risks. However, conventional seismic wave equation-based approaches are computationally demanding, which hinders real-time applications. In addition to efficiency, forecasting and uncertainty analysis are not easy to handle using such numerical-simulation-based approaches. To this end, we propose a novel subsurface multiphysics monitoring and forecasting framework utilizing video diffusion models. This approach can generate high-quality representations of CO$_2$ evolution and associated changes in subsurface elastic properties. With reconstruction guidance, forecasting and inversion can be achieved conditioned on historical frames and/or observational data. Meanwhile, due to the generative nature of the approach, we can quantify uncertainty in the prediction. Tests based on the Compass model show that the proposed method successfully captures the inherently complex physical phenomena associated with CO$_2$ monitoring, and it can predict and invert the subsurface elastic properties and CO$_2$ saturation with consistency in their evolution.
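A rough sketch of reconstruction guidance for conditioning on historical or observed frames, in the spirit described above; the denoiser interface, the masking convention, and the guidance scale are illustrative assumptions.

```python
# Minimal sketch: nudge the predicted clean video toward consistency with observed frames (mask == 1).
import torch

def guided_x0(denoiser, x_t, t, observed, mask, scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    x0 = denoiser(x_t, t)                                   # predicted clean frames
    misfit = ((mask * (x0 - observed)) ** 2).sum()          # history/data consistency term
    grad = torch.autograd.grad(misfit, x_t)[0]
    return x0 - scale * grad                                # reconstruction-guided prediction
```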
Abstract:Accurate seismic velocity estimations are vital to understanding Earth's subsurface structures, assessing natural resources, and evaluating seismic hazards. Machine learning-based inversion algorithms have shown promising performance in regional (i.e., for exploration) and global velocity estimation, but their effectiveness hinges on access to large and diverse training datasets whose distributions generally cover the target solutions. Additionally, enhancing the precision and reliability of velocity estimation requires incorporating prior information, e.g., geological classes, well logs, and subsurface structures, but current statistical or neural network-based methods are not flexible enough to handle such multi-modal information. To address both challenges, we propose to use conditional generative diffusion models for seismic velocity synthesis, in which we readily incorporate those priors. This approach enables the generation of seismic velocities that closely match the expected target distribution, offering datasets informed by both expert knowledge and measured data to support training for data-driven geophysical methods. We demonstrate the flexibility and effectiveness of our method by training diffusion models on the OpenFWI dataset under various conditions, including class labels, well logs, and reflectivity images, as well as combinations of these priors. The performance of the approach under out-of-distribution conditions further underscores its generalization ability, showcasing its potential to provide tailored priors for velocity inverse problems and to create specific training datasets for machine learning-based geophysical applications.
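A small sketch of training-time condition dropout, which lets a single conditional diffusion model be sampled with any subset of the priors mentioned above (class label, well log, reflectivity image); the dropout rate, null values, and tensor shapes are illustrative assumptions.

```python
# Minimal sketch: randomly null out each prior so conditional and unconditional behavior are learned jointly.
import torch

def drop_conditions(class_id, well_log, reflectivity, p=0.2, null_class=0):
    """class_id: (B,), well_log: (B, 1, depth), reflectivity: (B, 1, H, W) -- assumed shapes."""
    b = class_id.shape[0]
    keep = lambda: (torch.rand(b, device=class_id.device) > p)
    class_id = torch.where(keep(), class_id, torch.full_like(class_id, null_class))
    well_log = well_log * keep().view(b, 1, 1).float()             # zeroed log acts as "no well"
    reflectivity = reflectivity * keep().view(b, 1, 1, 1).float()  # zeroed image acts as "no structure"
    return class_id, well_log, reflectivity
```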
Abstract:Recently, Physics-Informed Neural Networks (PINNs) have gained significant attention for their versatile interpolation capabilities in solving partial differential equations (PDEs). Despite their potential, the training can be computationally demanding, especially for intricate functions like wavefields. This is primarily because the neural (learned) basis functions are biased toward low frequencies, as they are dominated by polynomial calculations, which are not inherently wavefield-friendly. In response, we propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions by modeling them as linear combinations of Gabor basis functions that satisfy the wave equation. Specifically, for the Helmholtz equation, we augment the fully connected neural network model with an adaptable Gabor layer constituting the final hidden layer, employing a weighted summation of these Gabor neurons to compute the predictions (output). The weights/coefficients of the Gabor functions are learned from the previous hidden layers, which include nonlinear activation functions. To ensure the Gabor layer's utilization across the model space, we incorporate a smaller auxiliary network to forecast the center of each Gabor function based on the input coordinates. Realistic assessments showcase the efficacy of this novel implementation compared to the vanilla PINN, particularly in scenarios involving high frequencies and realistic models that are often challenging for PINNs.
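A minimal sketch of a final Gabor layer whose coefficients come from the preceding hidden layers and whose centers are forecast by a small auxiliary network, as outlined above; the sizes, the isotropic Gabor form, and the fixed width are illustrative assumptions.

```python
# Minimal sketch: fully connected trunk -> coefficients; auxiliary net -> Gabor centers; output = weighted sum.
import torch
import torch.nn as nn

class GaborLayerPINN(nn.Module):
    def __init__(self, hidden=64, n_gabor=32, omega=2.0 * torch.pi * 10.0, sigma=0.1):
        super().__init__()
        self.omega, self.sigma, self.n_gabor = omega, sigma, n_gabor
        self.trunk = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden), nn.Tanh(),
                                   nn.Linear(hidden, n_gabor))        # coefficients of the Gabor neurons
        self.center_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                                        nn.Linear(32, 2 * n_gabor))   # auxiliary net: per-neuron centers

    def forward(self, xz):
        coeff = self.trunk(xz)                                         # (B, n_gabor)
        centers = self.center_net(xz).view(-1, self.n_gabor, 2)        # (B, n_gabor, 2)
        dist = (xz[:, None, :] - centers).norm(dim=-1)                 # distance to each Gabor center
        gabor = torch.exp(-0.5 * (dist / self.sigma) ** 2) * torch.cos(self.omega * dist)
        return (coeff * gabor).sum(dim=-1, keepdim=True)               # weighted sum of Gabor neurons
```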
Abstract:The computation of the seismic wavefield by solving the Helmholtz equation is crucial to many practical applications, e.g., full waveform inversion. Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs), but their convergence is slow. To address this problem, we propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training, e.g., frequency, to achieve much faster convergence. Specifically, we use the Gabor basis function due to its proven ability to represent wavefields accurately and refer to the implementation as GaborPINN. We also incorporate prior information on the frequency of the wavefield into the design of the method to mitigate the influence of discontinuities in the wavefield represented by GaborPINN. The proposed method achieves up to a two-order-of-magnitude increase in convergence speed compared with conventional PINNs.
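A compact sketch of a multiplicative filter network with Gabor filters, the general structure referenced above; the filter parameterization, the frequency-based scaling of the filter weights, and the layer sizes are illustrative assumptions.

```python
# Minimal sketch: hidden features are built by multiplicative combinations of Gabor filters of the input.
import torch
import torch.nn as nn

class GaborFilter(nn.Module):
    def __init__(self, in_dim=2, out_dim=128, freq_scale=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.mu = nn.Parameter(torch.rand(out_dim, in_dim) * 2 - 1)   # filter centers
        self.gamma = nn.Parameter(torch.rand(out_dim))                # envelope widths
        self.linear.weight.data *= freq_scale                         # scale set from the known frequency

    def forward(self, x):
        d2 = ((x[:, None, :] - self.mu) ** 2).sum(-1)                 # squared distance to each center
        return torch.exp(-0.5 * self.gamma * d2) * torch.sin(self.linear(x))

class GaborMFN(nn.Module):
    def __init__(self, hidden=128, n_layers=3):
        super().__init__()
        self.filters = nn.ModuleList([GaborFilter(2, hidden) for _ in range(n_layers)])
        self.linears = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_layers - 1)])
        self.out = nn.Linear(hidden, 2)                               # real and imaginary wavefield parts

    def forward(self, x):
        z = self.filters[0](x)
        for lin, filt in zip(self.linears, self.filters[1:]):
            z = lin(z) * filt(x)                                      # multiplicative filtering
        return self.out(z)
```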
Abstract:StorSeismic is a recently introduced Transformer-based model that adapts to various seismic processing tasks through its pretraining and fine-tuning strategy. In the original implementation, StorSeismic utilized a sinusoidal positional encoding and a conventional self-attention mechanism, both borrowed from natural language processing (NLP) applications. For seismic processing, they yielded good results but also revealed limitations in efficiency and expressiveness. We propose modifications to these two key components, utilizing relative positional encoding and low-rank attention matrices as replacements for the vanilla ones. The proposed changes are tested on processing tasks applied to realistic Marmousi and offshore field data in a sequential strategy, starting with denoising, then direct arrival removal and multiple attenuation, and finally root-mean-squared velocity ($V_{RMS}$) prediction for normal moveout (NMO) correction. We observe faster pretraining, competitive results on the fine-tuning tasks, and fewer trainable parameters compared to the vanilla model.
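A minimal sketch of self-attention with a learned position bias and a Linformer-style low-rank projection of the keys and values, illustrating the kind of replacements described above; the dimensions, the rank, and the exact form of the relative bias are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch: low-rank attention (length projected to rank k) with a learned position bias.
import torch
import torch.nn as nn

class LowRankRelAttention(nn.Module):
    def __init__(self, dim=64, seq_len=256, k=32):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Parameter(torch.randn(k, seq_len) / seq_len**0.5)  # projects sequence length -> rank k
        self.rel_bias = nn.Parameter(torch.zeros(seq_len, k))             # learned positional bias on attention
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, seq_len, dim)
        q = self.q(x)
        k_, v = self.kv(x).chunk(2, dim=-1)
        k_ = self.proj @ k_                  # (batch, k, dim): low-rank keys
        v = self.proj @ v                    # (batch, k, dim): low-rank values
        attn = (q @ k_.transpose(-2, -1)) * self.scale + self.rel_bias
        return torch.softmax(attn, dim=-1) @ v
```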