Abstract: Coverage optimization generally involves deploying a set of facilities (e.g., sensors) to best satisfy the demands of specified points, with wide applications in fields such as location science and sensor networks. In practice, coverage optimization focuses on target coverage, which is typically formulated as a Mixed-Variable Optimization Problem (MVOP) due to complex real-world constraints. Meanwhile, high-fidelity discretization and visibility analysis introduce additional computations, significantly increasing the computational cost. These factors pose significant challenges for fitness evaluations (FEs) in canonical Evolutionary Algorithms (EAs) and turn the coverage problem into an Expensive Mixed-Variable Optimization Problem (EMVOP). To address these issues, we propose the RankNet-Inspired Surrogate-assisted Hybrid Metaheuristic (RI-SHM), an extension of our previous work. RI-SHM integrates three key components: (1) a RankNet-based pairwise global surrogate that predicts rankings between pairs of individuals, bypassing the difficulty of estimating fitness in a discontinuous solution space; (2) a surrogate-assisted local Estimation of Distribution Algorithm (EDA) that enhances local exploitation and helps escape local optima; and (3) a fitness-diversity-driven switching strategy that dynamically balances exploration and exploitation. Experiments demonstrate that our algorithm can effectively handle large-scale coverage optimization tasks with up to 300 dimensions and more than 1,800 targets within a desirable runtime. Compared with state-of-the-art algorithms for EMVOPs, RI-SHM consistently outperforms them by up to 56.5$\%$ across all tested instances.
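To make the pairwise-ranking idea concrete, below is a minimal sketch of a RankNet-style surrogate in PyTorch: a scorer network is trained so that the sigmoid of the score difference approximates the probability that one candidate outranks the other. This is the generic RankNet formulation, not the authors' RI-SHM implementation; the class name `PairwiseRankSurrogate`, the network sizes, and the random training pairs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairwiseRankSurrogate(nn.Module):
    """RankNet-style surrogate: scores each candidate solution and
    compares a pair through the difference of their scores."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xi: torch.Tensor, xj: torch.Tensor) -> torch.Tensor:
        # P(xi ranks better than xj) = sigmoid(s_i - s_j)
        return torch.sigmoid(self.scorer(xi) - self.scorer(xj))

# Training step on pairs labeled from already-evaluated individuals:
# label = 1.0 if fitness(xi) is better than fitness(xj), else 0.0.
model = PairwiseRankSurrogate(dim=300)          # 300-dim decision vector
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()

xi = torch.randn(128, 300)                      # placeholder pairs
xj = torch.randn(128, 300)
labels = torch.randint(0, 2, (128, 1)).float()  # placeholder rank labels
optimizer.zero_grad()
loss = criterion(model(xi, xj), labels)
loss.backward()
optimizer.step()
```

Because the surrogate only has to order candidates, not reproduce fitness values, it sidesteps regression over a discontinuous mixed-variable landscape: any monotone distortion of the true fitness leaves the learned ranking unchanged.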
Abstract: Matching visible and near-infrared (NIR) images remains a significant challenge in remote sensing image fusion. Nonlinear radiometric differences between heterogeneous remote sensing images make the matching task even more difficult. Deep learning has gained substantial attention in computer vision in recent years, but many methods rely on supervised learning and require large amounts of annotated data, which is frequently scarce in remote sensing image matching. To address this challenge, this paper proposes a novel keypoint descriptor approach that obtains robust feature descriptors via a self-supervised matching network. A lightweight transformer network, termed LTFormer, is designed to generate deep-level feature descriptors. Furthermore, we design an innovative triplet loss function, LT Loss, to further enhance matching performance. Our approach outperforms conventional hand-crafted local feature descriptors and proves competitive with state-of-the-art deep learning-based methods, even with limited annotated data.
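The abstract does not spell out LT Loss itself; as a point of reference, the sketch below shows the standard triplet margin loss that descriptor-learning losses of this kind typically extend, applied to visible/NIR patch descriptors. The descriptor dimension, batch size, and margin are illustrative assumptions, and the random tensors stand in for descriptors a backbone such as LTFormer would produce.

```python
import torch
import torch.nn.functional as F

def triplet_margin_loss(anchor: torch.Tensor,
                        positive: torch.Tensor,
                        negative: torch.Tensor,
                        margin: float = 1.0) -> torch.Tensor:
    """Standard triplet loss on L2-normalized descriptors: pulls a
    visible-image patch toward its matching NIR patch (positive) and
    pushes it away from a non-matching patch (negative)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    d_pos = (anchor - positive).norm(dim=-1)  # distance to true match
    d_neg = (anchor - negative).norm(dim=-1)  # distance to impostor
    return F.relu(d_pos - d_neg + margin).mean()

# Placeholder 128-dim descriptors for a batch of 32 triplets.
a, p, n = torch.randn(32, 128), torch.randn(32, 128), torch.randn(32, 128)
loss = triplet_margin_loss(a, p, n)
```

Because the positive pairs come from known visible/NIR correspondences rather than manual labels, a loss of this form can be trained self-supervised, which is what lets the approach cope with the scarcity of annotated remote sensing data.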