Abstract: Initially designed to detect and characterize exoplanets, extreme adaptive optics (AO) systems open a new window on the solar system by resolving its small bodies. Nonetheless, despite the ever-increasing performance of AO systems, the correction is not perfect: it degrades the images and produces a bright halo that can hide faint, close-in moons. Using a reference point spread function (PSF) is not always sufficient because of the random nature of the turbulence. In this work, we present a method to overcome this limitation. It blindly reconstructs the AO-PSF directly from the data of interest, without any prior on the instrument or on the asteroid's shape. First, the PSF core parameters are estimated under the assumption of a sharp-edged, flat object, which allows the image of the main body to be deconvolved. Then, the faint extensions of the PSF are reconstructed by a robust penalized optimization that discards outliers on the fly, such as cosmic rays, defective pixels and moons. The asteroid's halo can then be properly modeled and removed. Finally, moons can be detected in the residuals, using the reconstructed PSF and the knowledge of the outliers learned by the robust method. We show that our method is easily applied to different instruments (VLT/SPHERE, Keck/NIRC2) and efficiently retrieves the features of AO-PSFs. Compared with state-of-the-art moon-enhancement algorithms, the moon signal is greatly improved, and our robust detection method manages to discriminate faint moons from outliers.
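The robust reconstruction of the faint PSF extensions can be illustrated with a short sketch. The snippet below fits a radially symmetric halo profile to the residuals left after subtracting the deconvolved main body, using a robust (Cauchy) loss so that bright outliers such as cosmic rays, defective pixels and moons are down-weighted. The Moffat-like profile, the 3-MAD loss scale and the function names (`moffat_halo`, `robust_halo_fit`) are illustrative assumptions, not the exact model used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def moffat_halo(params, r):
    # Radially symmetric Moffat profile, used here as a stand-in for the
    # faint extensions (halo) of the AO-PSF; `r` is the distance to the PSF core.
    amp, alpha, beta = params
    return amp * (1.0 + (r / alpha) ** 2) ** (-beta)

def robust_halo_fit(image, main_body_model, r, p0=(1e-3, 20.0, 1.5)):
    # Fit the halo on the residuals left after subtracting the model of the
    # main body. The Cauchy loss gives outliers (cosmic rays, defective pixels,
    # moons) a bounded influence, so they are effectively discarded on the fly.
    data = (image - main_body_model).ravel()
    r_flat = r.ravel()

    def residuals(p):
        return moffat_halo(p, r_flat) - data

    # Illustrative robust scale: 3 times the median absolute deviation.
    mad = 1.4826 * np.median(np.abs(data - np.median(data)))
    fit = least_squares(residuals, p0, loss="cauchy", f_scale=3.0 * mad)
    return fit.x  # (amplitude, width, slope) of the reconstructed halo
```

In practice, the fitted halo would be subtracted from the image so that moons can be searched for in the residuals, while masking the pixels flagged as outliers by the robust fit.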
Abstract: Exoplanet detection by direct imaging is a difficult task: the faint signals from the objects of interest are buried under a spatially structured nuisance component induced by the host star. The exoplanet signals can only be identified by combining several observations with dedicated detection algorithms. In contrast to most existing methods, we propose to learn a model of the spatial, temporal and spectral characteristics of the nuisance directly from the observations. In a pre-processing step, a statistical model of the local correlations of the nuisance is built, and the data are centered and whitened to improve both their stationarity and their signal-to-noise ratio (SNR). A convolutional neural network (CNN) is then trained in a supervised fashion to detect the residual signature of synthetic sources in the pre-processed images. Our method achieves a better trade-off between precision and recall than standard approaches in the field, and it outperforms a state-of-the-art algorithm based solely on a statistical framework. In addition, exploiting the spectral diversity improves the performance compared to a similar model built solely from spatio-temporal data.
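A minimal sketch of this two-stage idea is given below, assuming the data are rearranged into local patches of shape (frames, pixels) and using standard NumPy/PyTorch building blocks. The ridge regularization constant, the tiny network architecture and the names `center_and_whiten` and `PatchDetector` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import torch.nn as nn

def center_and_whiten(patches):
    # patches: (n_frames, n_pixels) matrix of a local patch extracted at the
    # same position in every frame. Temporal centering removes the static part
    # of the stellar halo; whitening with the inverse Cholesky factor of the
    # empirical covariance decorrelates the remaining nuisance.
    centered = patches - patches.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)
    cov += 1e-6 * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])  # ridge for stability
    chol = np.linalg.cholesky(cov)
    return np.linalg.solve(chol, centered.T).T  # whitened patches

class PatchDetector(nn.Module):
    # Small CNN trained in a supervised way on pre-processed patches containing
    # injected synthetic sources (label 1) or nuisance only (label 0). Spectral
    # channels, when available, are stacked along the channel dimension.
    def __init__(self, n_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)  # detection logit for each patch
```

Training would then minimize a binary classification loss (e.g. `nn.BCEWithLogitsLoss`) over pre-processed patches with and without injected synthetic sources.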