Abstract: With the expanding application scope of unmanned aerial vehicles (UAVs), the demand for stable UAV control has grown significantly. However, in complex environments, GPS signals are prone to interference, rendering UAV positioning unreliable. Self-positioning of UAVs in GPS-denied environments has therefore become a critical objective. Some existing methods obtain geolocation information in GPS-denied environments by matching ground objects in the UAV view against remote sensing images. However, most of these methods provide only coarse-level positioning, which suffices for cross-view geo-localization but cannot support precise UAV positioning tasks. Consequently, this paper focuses on a newer and more challenging task: precise UAV self-positioning based on remote sensing images. This task requires considering not only the appearance features of ground objects but also their spatial distribution within the images. To address this challenge, we present a deep learning framework with a geographic-information-adaptive loss, which achieves precise localization by finely aligning UAV images with the corresponding satellite imagery through the integration of geographic information from multiple perspectives. To validate the proposed method, we conducted a series of experiments. The results demonstrate its effectiveness in enabling UAVs to achieve precise self-positioning from remote sensing imagery.
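The abstract does not specify the formulation of the geographic-information-adaptive loss; the following is a minimal, hypothetical PyTorch sketch of what such a loss could look like, assuming a metric-learning setup in which each UAV-satellite pair's contribution is weighted by its ground-truth geographic offset. The function name `geo_adaptive_loss`, the exponential distance weighting, and all hyperparameters are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def geo_adaptive_loss(uav_emb, sat_emb, geo_offsets, margin=0.5, alpha=0.1):
    """Illustrative geographic-information-adaptive loss (hypothetical sketch).

    uav_emb     -- (B, D) embeddings of UAV-view images
    sat_emb     -- (B, D) embeddings of the matched satellite patches
    geo_offsets -- (B,) geographic distance (e.g. metres) between each UAV
                   position and the centre of its matched satellite patch
    """
    uav_emb = F.normalize(uav_emb, dim=1)
    sat_emb = F.normalize(sat_emb, dim=1)

    # Cosine similarity between every UAV/satellite pair in the batch.
    sim = uav_emb @ sat_emb.t()                      # (B, B)
    pos = sim.diag()                                 # matched pairs

    # Hardest non-matching satellite patch for each UAV image.
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float('-inf')).max(dim=1).values

    # Assumed adaptive term: pairs whose ground-truth geographic offset is
    # large are pulled together less aggressively than well-centred pairs.
    weights = torch.exp(-alpha * geo_offsets)

    return (weights * F.relu(neg - pos + margin)).mean()

# Toy usage with random embeddings (shapes only; no real data involved):
B, D = 8, 256
uav = torch.randn(B, D, requires_grad=True)
sat = torch.randn(B, D, requires_grad=True)
loss = geo_adaptive_loss(uav, sat, geo_offsets=torch.rand(B) * 10.0)
loss.backward()  # gradients flow to both embedding branches
```

The exponential down-weighting of geographically distant pairs is just one plausible way to inject geographic information into a triplet-style loss; the paper's actual formulation may handle hard negatives, multiple viewpoints, or spatial layout differently.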