Seismic data interpolation plays a crucial role in subsurface imaging, enabling accurate analysis and interpretation throughout the seismic processing workflow. Although deep supervised learning methods for seismic data reconstruction have been widely explored, several challenges remain open. In particular, the need for extensive training data and the poor domain generalization caused by variability across seismic surveys pose significant issues. To overcome these limitations, this paper introduces a deep-learning-based seismic data reconstruction approach that leverages data redundancy. The method involves a two-stage training process. First, a generative adversarial network (GAN) is trained on synthetic seismic data, enabling it to extract and learn their primary and local seismic characteristics. Second, a reconstruction network is trained on synthetic data generated by the GAN, with the noise and distortion levels dynamically adjusted at each epoch to promote feature diversity. This approach enhances the generalization capability of the reconstruction network by allowing control over the generation of seismic patterns from the GAN's latent space, thereby reducing the dependency on large seismic databases. Experimental results on field and synthetic seismic datasets, both pre-stack and post-stack, show that the proposed method outperforms baseline supervised learning and unsupervised approaches, such as deep seismic prior and internal learning, by up to 8 dB in PSNR.
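To make the two-stage pipeline described above more concrete, the following is a minimal sketch in PyTorch. It is illustrative only: the `Generator`, `Reconstructor`, the trace-masking corruption, and the per-epoch noise/distortion schedules are hypothetical placeholders assumed for this sketch, not the architecture or schedules used in the paper. Stage 1 (adversarial training of the GAN on synthetic seismic data) is assumed to have already been completed; the loop shows Stage 2, where the reconstruction network is trained on GAN-generated patches whose corruption level is re-drawn every epoch.

```python
# Minimal sketch of the two-stage reconstruction pipeline (illustrative;
# network definitions and corruption schedules are placeholders).
import torch
import torch.nn as nn

latent_dim = 128

class Generator(nn.Module):
    """Toy GAN generator mapping latent vectors to seismic patches."""
    def __init__(self, patch=64):
        super().__init__()
        self.patch = patch
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, patch * patch), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, self.patch, self.patch)

class Reconstructor(nn.Module):
    """Toy reconstruction network (stand-in for the paper's architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def random_trace_mask(x, drop_prob):
    """Zero out random traces (columns) to simulate missing seismic data."""
    keep = (torch.rand(x.shape[0], 1, 1, x.shape[-1], device=x.device) > drop_prob).float()
    return x * keep

# Stage 1 (assumed done elsewhere): G was trained adversarially on synthetic
# seismic patches, so sampling G(z) yields plausible seismic-like data.
G = Generator().eval()
R = Reconstructor()
opt = torch.optim.Adam(R.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stage 2: train the reconstructor on GAN-generated patches, rescheduling the
# noise and distortion levels every epoch to diversify the features it sees.
for epoch in range(10):
    drop_prob = 0.1 + 0.05 * epoch        # epoch-dependent trace-dropout level (illustrative schedule)
    noise_std = 0.01 * (1 + epoch)        # epoch-dependent additive-noise level (illustrative schedule)
    for _ in range(100):                  # batches per epoch
        with torch.no_grad():
            clean = G(torch.randn(16, latent_dim))   # synthetic "ground truth" sampled from the GAN
        corrupted = random_trace_mask(clean, drop_prob)
        corrupted = corrupted + noise_std * torch.randn_like(corrupted)
        loss = loss_fn(R(corrupted), clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this sketch, control over the generated patterns comes from sampling the GAN's latent space on the fly, so the reconstruction network never needs a large field-data training set; only the corruption model (masking plus noise) changes across epochs.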