Abstract: Building subsurface velocity models is essential to our goals of utilizing seismic data for Earth discovery, exploration, and monitoring. With the dawn of machine learning, these velocity models (or, more precisely, their distribution) can be stored accurately and efficiently in a generative model. The stored velocity model distributions can then be used to regularize or quantify uncertainties in inverse problems, like full waveform inversion. However, most generators, like normalizing flows or diffusion models, treat the image (velocity model) uniformly, disregarding spatial dependencies and resolution changes with respect to the observation locations. To address this weakness, we introduce VelocityGPT, a novel implementation that utilizes Transformer decoders trained autoregressively to generate a velocity model from the shallow subsurface downward. Because seismic data are often recorded on the Earth's surface, a top-down generator can utilize the inverted information in the shallow section as guidance (prior) for generating the deeper part. To facilitate the implementation, we use an additional network to compress the velocity model. We also inject prior information, like well data or structure (represented by a migration image), to generate the velocity model. Using synthetic data, we demonstrate the effectiveness of VelocityGPT as a promising approach for generative-model applications in seismic velocity model building.
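To make the top-down generation concrete, below is a minimal PyTorch sketch of a decoder-only Transformer that produces a velocity model as a sequence of discrete tokens ordered from shallow to deep. It is illustrative only: the names (VelocityDecoder, vocab_size, prefix) are assumptions, and the tokenization by a separate compression network (e.g., a VQ-style codebook) is assumed rather than taken from the paper.

```python
# Minimal sketch (not the authors' code) of top-down autoregressive generation.
# Assumes the velocity model was compressed into integer tokens, ordered shallow -> deep.
import torch
import torch.nn as nn

class VelocityDecoder(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, nhead=8, num_layers=6, max_len=1024):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq) integer indices ordered shallow -> deep
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(tokens.device)
        x = self.blocks(x, mask=causal)   # causal mask: deep tokens attend only to shallower ones
        return self.head(x)               # next-token logits

    @torch.no_grad()
    def generate(self, prefix, steps, temperature=1.0):
        tokens = prefix                   # shallow (inverted/known) tokens act as the prior
        for _ in range(steps):
            logits = self(tokens)[:, -1] / temperature
            nxt = torch.multinomial(logits.softmax(-1), 1)
            tokens = torch.cat([tokens, nxt], dim=1)   # append the next, deeper token
        return tokens

model = VelocityDecoder()
shallow_prior = torch.randint(0, 512, (1, 16))   # placeholder shallow tokens
full_sequence = model.generate(shallow_prior, steps=48)
```

In this sketch, the shallow prefix plays the role of the prior inferred near the surface, and sampling extends the sequence downward one token at a time.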
Abstract: StorSeismic is a recently introduced Transformer-based model that adapts to various seismic processing tasks through a pretraining and fine-tuning strategy. In the original implementation, StorSeismic utilized sinusoidal positional encoding and a conventional self-attention mechanism, both borrowed from natural language processing (NLP) applications. For seismic processing, these defaults admitted good results, but also hinted at limitations in efficiency and expressiveness. We propose modifications to these two key components, utilizing relative positional encoding and low-rank attention matrices as replacements for the vanilla ones. The proposed changes are tested on processing tasks applied, in a sequential strategy, to a realistic Marmousi synthetic dataset and offshore field data, starting with denoising, followed by direct-arrival removal, multiple attenuation, and finally root-mean-squared velocity ($V_{RMS}$) prediction for normal-moveout (NMO) correction. We observe faster pretraining, competitive results on the fine-tuning tasks, and fewer trainable parameters compared to the vanilla model.
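As a rough illustration of the two proposed replacements, the sketch below combines a learnable relative-position bias with low-rank query/key projections in a single attention head. This is a generic formulation (the names LowRankRelativeAttention, rank, and max_rel_dist are assumptions), not necessarily the exact parameterization used in the modified StorSeismic.

```python
# Minimal sketch: relative-position bias + low-rank (rank << d_model) attention.
import torch
import torch.nn as nn

class LowRankRelativeAttention(nn.Module):
    def __init__(self, d_model=128, rank=16, max_rel_dist=256):
        super().__init__()
        self.q_proj = nn.Linear(d_model, rank, bias=False)      # low-rank query projection
        self.k_proj = nn.Linear(d_model, rank, bias=False)      # low-rank key projection
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.rel_bias = nn.Embedding(2 * max_rel_dist - 1, 1)   # one bias per relative offset
        self.max_rel_dist = max_rel_dist
        self.scale = rank ** -0.5

    def forward(self, x):
        # x: (batch, seq, d_model) -- e.g., a sequence of traces in a shot gather
        b, n, _ = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = torch.einsum("bid,bjd->bij", q, k) * self.scale
        # relative offsets i - j, clipped to the supported range, looked up as a learned bias
        pos = torch.arange(n, device=x.device)
        rel = (pos[:, None] - pos[None, :]).clamp(-self.max_rel_dist + 1, self.max_rel_dist - 1)
        scores = scores + self.rel_bias(rel + self.max_rel_dist - 1).squeeze(-1)
        attn = scores.softmax(dim=-1)
        return torch.einsum("bij,bjd->bid", attn, v)

x = torch.randn(2, 64, 128)          # (batch, traces, features)
out = LowRankRelativeAttention()(x)  # (2, 64, 128)
```

The relative bias depends only on trace offsets rather than absolute positions, and the low-rank projections reduce the number of attention parameters, which is consistent with the efficiency gains reported.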
Abstract: Machine-learned tasks on seismic data are often trained sequentially and separately, even though they utilize the same (e.g., geometrical) features of the data. We present StorSeismic, a framework for seismic data processing that consists of neural network pre-training and fine-tuning procedures. Specifically, we utilize a neural network as a preprocessing model to store the seismic data features of a particular dataset for any downstream tasks. After pre-training, the resulting model can be utilized later, through a fine-tuning procedure, to perform tasks using limited additional training. Used often in Natural Language Processing (NLP) and lately in vision tasks, BERT (Bidirectional Encoder Representations from Transformers), a form of Transformer model, provides an optimal platform for this framework. The attention mechanism of BERT, applied here to a sequence of traces within a shot gather, is able to capture and store key geometrical features of the seismic data. We pre-train StorSeismic on field data, along with synthetically generated ones, in a self-supervised step. Then, we use the labeled synthetic data to fine-tune the pre-trained network in a supervised fashion to perform various seismic processing tasks, like denoising, velocity estimation, first-arrival picking, and NMO correction. Finally, the fine-tuned model is used to obtain satisfactory inference results on the field data.
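The self-supervised pre-training idea can be sketched, under assumptions, as masked-trace reconstruction: traces in a shot gather are embedded as tokens, a random subset is zeroed out, and a bidirectional Transformer encoder learns to reconstruct them; fine-tuning then reuses the encoder with a task-specific head. The names (TraceBERT, pretrain_step, mask_ratio) are hypothetical, and this is not the released StorSeismic implementation.

```python
# Minimal sketch of BERT-style masked-trace pre-training and a swappable task head.
import torch
import torch.nn as nn

class TraceBERT(nn.Module):
    def __init__(self, n_samples=256, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(n_samples, d_model)             # one trace -> one token
        layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.recon_head = nn.Linear(d_model, n_samples)        # pre-training head
        self.task_head = nn.Linear(d_model, n_samples)         # e.g., denoising head (fine-tuning)

    def forward(self, gather, head="recon"):
        # gather: (batch, n_traces, n_samples)
        h = self.encoder(self.embed(gather))
        return self.recon_head(h) if head == "recon" else self.task_head(h)

def pretrain_step(model, gather, mask_ratio=0.15):
    # Zero out a random subset of traces and reconstruct them (self-supervised).
    mask = torch.rand(gather.shape[:2]) < mask_ratio           # (batch, n_traces)
    masked = gather.clone()
    masked[mask] = 0.0
    pred = model(masked, head="recon")
    return ((pred[mask] - gather[mask]) ** 2).mean()           # loss only on masked traces

model = TraceBERT()
gather = torch.randn(4, 32, 256)                               # 4 gathers, 32 traces, 256 samples
loss = pretrain_step(model, gather)
loss.backward()
```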