Abstract: Strong gravitational lensing is a powerful tool for investigating dark matter and dark energy. With the advent of large-scale sky surveys, strong lensing systems can be discovered on an unprecedented scale, which requires efficient tools to extract them from billions of astronomical objects. Existing mainstream lens-finding tools are based on machine learning algorithms and are applied to cutout images centered on galaxies. However, given the design and survey strategy of the optical surveys conducted by CSST, preparing multi-band cutouts requires considerable effort. To overcome these challenges, we have developed a framework based on a hierarchical visual Transformer with a sliding-window technique that searches for strong lensing systems within entire images. Moreover, because multi-color images of strong lensing systems provide insights into their physical characteristics, our framework is specifically designed to identify strong lensing systems in images with any number of channels. Evaluated on CSST mock data built from a Semi-Analytic Model named CosmoDC2, our framework achieves precision and recall of 0.98 and 0.90, respectively. To assess its effectiveness on real observations, we applied it to a subset of images from the DESI Legacy Imaging Surveys and to media images from the Euclid Early Release Observations, and our method discovered 61 new strong lensing candidates. However, we also identified false positives arising primarily from the simplified galaxy morphology assumptions in the simulation, which underscores the practical limitations of our approach while highlighting avenues for future improvement.
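The sliding-window search over entire, multi-channel images can be illustrated with a minimal sketch. Here `classifier`, the window size, and the stride are illustrative stand-ins, not the paper's actual Transformer or survey parameters; the point is only that the scan works for an arbitrary number of channels.

```python
import numpy as np

def sliding_window_scores(image, classifier, window=64, stride=32):
    """Scan a (channels, H, W) image with a sliding window and score
    each cutout with a lens/non-lens classifier.

    `classifier` is any callable mapping a (channels, window, window)
    array to a score; here it is a placeholder for a trained model.
    """
    _, h, w = image.shape
    detections = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = image[:, y:y + window, x:x + window]
            detections.append((y, x, classifier(patch)))
    return detections

# Toy usage: a 3-band image and a dummy mean-flux "classifier".
img = np.random.rand(3, 128, 128)
dets = sliding_window_scores(img, lambda p: float(p.mean()))
print(len(dets))  # 9 windows: a 3x3 grid of positions
```

In practice, high-scoring windows would be merged (e.g., by non-maximum suppression) into candidate detections.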
Abstract: In the imaging process of an astronomical telescope, deconvolving its beam or point spread function (PSF) is a crucial task. However, deconvolution is a classical and challenging inverse problem. When the beam or PSF is complex or inaccurately measured, as in interferometric arrays and certain radio telescopes, the resulting blurry images are often difficult to interpret visually or to analyze with traditional physical detection methods. We argue that traditional methods frequently lack specific prior knowledge, leading to suboptimal performance. To address this issue and achieve image deconvolution and reconstruction, we propose an unsupervised network architecture that incorporates prior physical information. The network adopts an encoder-decoder structure and leverages the telescope's PSF as prior knowledge. During network training, we introduce accelerated Fast Fourier Transform (FFT) convolution to enable efficient processing of high-resolution input images and PSFs. We explore several classic regression networks, including the autoencoder (AE) and U-Net, and conduct a comprehensive comparative performance evaluation.
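The physics-informed, unsupervised idea can be sketched in a few lines: reblur the network's estimate with the known PSF via FFT convolution and penalize the mismatch with the observation. This is a minimal numpy sketch of the forward model and data-fidelity term only; the function names are illustrative, and the paper's actual architecture (encoder-decoder, AE, U-Net) is not reproduced here.

```python
import numpy as np

def fft_convolve(image, psf):
    """Convolve an image with a PSF via FFT (circular convolution),
    matching the telescope's blurring forward model."""
    psf_pad = np.zeros_like(image)
    h, w = psf.shape
    psf_pad[:h, :w] = psf
    # Shift the PSF center to the origin so convolution does not
    # translate the image.
    psf_pad = np.roll(psf_pad, (-(h // 2), -(w // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf_pad)))

def reconstruction_loss(estimate, observed, psf):
    """Unsupervised data-fidelity loss: blur the current estimate with
    the known PSF and compare it to the observed image."""
    return float(np.mean((fft_convolve(estimate, psf) - observed) ** 2))
```

Because the loss compares the reblurred estimate to the observation itself, no ground-truth sharp images are needed during training; in a deep-learning framework the same FFT convolution would be written with differentiable tensor operations.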