Abstract:Quantum nonlocality describes a stronger form of quantum correlation than entanglement. It refutes Einstein's belief in local realism and is among the most distinctive and enigmatic features of quantum mechanics. It is a crucial resource for achieving quantum advantages in a variety of practical applications, ranging from cryptography and certified random number generation via self-testing to machine learning. Nevertheless, detecting nonlocality, especially in quantum many-body systems, is notoriously challenging. Here, we report an experimental certification of genuine multipartite Bell correlations, which signal nonlocality in quantum many-body systems, up to 24 qubits on a fully programmable superconducting quantum processor. In particular, we employ energy as a Bell correlation witness and variationally decrease the energy of a many-body system across a hierarchy of thresholds, below which an increasing Bell correlation depth can be certified from experimental data. As an illustrative example, we variationally prepare the low-energy state of a two-dimensional honeycomb model with 73 qubits and certify its Bell correlations by measuring an energy that surpasses the corresponding classical bound by up to 48 standard deviations. In addition, we variationally prepare a sequence of low-energy states and certify their genuine multipartite Bell correlations up to 24 qubits via energies measured efficiently using parity oscillation and multiple quantum coherence techniques. Our results establish a viable approach for preparing and certifying multipartite Bell correlations, which provide not only a finer benchmark beyond entanglement for quantum devices but also a valuable guide towards exploiting multipartite Bell correlations in a wide spectrum of practical applications.
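The certification logic described in this abstract can be summarized schematically; the notation below is illustrative only, and the concrete thresholds depend on the Bell operator and Hamiltonian used in the experiment:
\[
E = \langle \psi | H | \psi \rangle < E_c^{(k)} \;\Longrightarrow\; \text{Bell correlation depth} \geq k+1,
\]
where \(E_c^{(k)}\) denotes the minimum energy attainable by states whose Bell correlation depth is at most \(k\). Driving the measured energy below successive thresholds in this hierarchy therefore certifies an increasing Bell correlation depth.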
Abstract:Matching visible and near-infrared (NIR) images remains a significant challenge in remote sensing image fusion. The nonlinear radiometric differences between heterogeneous remote sensing images make the matching task even more difficult. Deep learning has gained substantial attention in computer vision in recent years, but many methods rely on supervised learning and require large amounts of annotated data, which is frequently scarce in remote sensing image matching. To address this challenge, this paper proposes a novel keypoint descriptor approach that obtains robust feature descriptors via a self-supervised matching network. A lightweight transformer network, termed LTFormer, is designed to generate deep-level feature descriptors. Furthermore, we implement an innovative triplet loss function, LT Loss, to further enhance matching performance. Our approach outperforms conventional hand-crafted local feature descriptors and remains competitive with state-of-the-art deep learning-based methods, even with limited annotated data.
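As a point of reference for the triplet objective mentioned above, the following is a minimal sketch of a standard triplet margin loss on descriptor batches; the paper's LT Loss may differ in its exact formulation, and all function and variable names here are illustrative.

```python
import torch
import torch.nn.functional as F

def triplet_descriptor_loss(anchor, positive, negative, margin=1.0):
    # anchor, positive, negative: (N, D) descriptor batches; anchor/positive are
    # descriptors of matching visible/NIR patches, negative of a non-matching patch.
    anchor = F.normalize(anchor, dim=1)      # L2-normalize descriptors
    positive = F.normalize(positive, dim=1)
    negative = F.normalize(negative, dim=1)
    d_ap = (anchor - positive).norm(dim=1)   # distance to the matching descriptor
    d_an = (anchor - negative).norm(dim=1)   # distance to the non-matching descriptor
    return F.relu(d_ap - d_an + margin).mean()

# Example usage with random 128-dimensional descriptors
a, p, n = (torch.randn(32, 128) for _ in range(3))
loss = triplet_descriptor_loss(a, p, n)
```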
Abstract:We present SDTracker, a method that harnesses the potential of synthetic data for multi-object tracking of real-world scenes in a domain-generalization and semi-supervised fashion. First, we use the ImageNet dataset as an auxiliary source to randomize the style of the synthetic data. With the resulting out-of-domain data, we further enforce a pyramid consistency loss across differently stylized images from the same sample to learn domain-invariant features. Second, we adopt pseudo-labeling to effectively utilize the unlabeled MOT17 training data. To obtain high-quality pseudo-labels, we apply the proximal policy optimization (PPO2) algorithm to search for confidence thresholds for each sequence. Using the unlabeled MOT17 training set, combined with a pure-motion tracking strategy enhanced by our post-processing, we finally reach 61.4 HOTA.
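The pyramid consistency idea can be sketched as follows; this is a minimal illustration under the assumption that consistency is enforced by penalizing deviations of each stylized copy's feature pyramid from the mean pyramid, which may differ from the exact loss used in SDTracker.

```python
import torch
import torch.nn.functional as F

def pyramid_consistency_loss(pyramids):
    # pyramids: list over stylized copies of one sample; each element is a list of
    # feature maps (one per pyramid level) with matching shapes across copies.
    num_levels = len(pyramids[0])
    loss = 0.0
    for level in range(num_levels):
        feats = torch.stack([p[level] for p in pyramids])  # (num_styles, N, C, H, W)
        mean_feat = feats.mean(dim=0, keepdim=True)        # average pyramid over styles
        loss = loss + F.l1_loss(feats, mean_feat.expand_as(feats))
    return loss / num_levels

# Example: three stylized copies of a batch, two pyramid levels
pyr = [[torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)] for _ in range(3)]
loss = pyramid_consistency_loss(pyr)
```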
Abstract:In this paper, we investigate the effect of the strongly time-varying transmission distance on the performance of the low-earth-orbit (LEO) satellite-terrestrial transmission (STT) system. We propose a new analytical framework based on a finite-state Markov channel (FSMC) model and a time-discretization method. Moreover, to demonstrate applications of the proposed framework, the performance of two adaptive transmission schemes, rate-adaptive transmission (RAT) and power-adaptive transmission (PAT), is evaluated for the cases where the transmit power or the transmission rate at the LEO satellite is fixed. Closed-form expressions for the throughput, energy efficiency (EE), and delay outage rate (DOR) of the considered systems are derived and verified, demonstrating that the proposed framework can characterize the capacity, energy-efficiency, and outage performance of the considered LEO STT scenarios.
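For orientation, a generic FSMC-based performance expression of the kind such a framework yields is sketched below; the paper's closed-form results additionally account for the time-varying distance and the adaptive schemes, so the notation here is purely illustrative:
\[
\bar{T} = \sum_{k=1}^{K} \pi_k R_k,
\]
where the received SNR range is partitioned into \(K\) intervals defining the FSMC states, \(\pi_k\) is the stationary probability of state \(k\), and \(R_k\) is the transmission rate supported in that state; energy efficiency then follows as throughput per unit of consumed power.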
Abstract:Multi-label learning has attracted considerable attention from the machine learning community. The problem-transformation method Binary Relevance reduces a multi-label problem to a set of familiar single-label (binary) classification tasks, one per label. Binary Relevance is widely used because of its simple structure and efficiency, but it does not consider the correlations between labels, which makes some tasks cumbersome to handle. This paper proposes a multi-label learning algorithm that can also be used for single-label classification. It is based on the standard support vector machine and replaces the original single decision hyperplane with two parallel decision hyperplanes; we call the resulting method the multi-label parallel support vector machine (MLPSVM). At the end of the article, MLPSVM is compared with other multi-label learning algorithms. The experimental results show that the algorithm performs well on the benchmark datasets.
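To make the geometric idea concrete, one plausible reading of the two-parallel-hyperplane construction is sketched below; this is an assumption made for illustration only, not the paper's exact formulation:
\[
f_1(x) = w^\top x + b_1, \qquad f_2(x) = w^\top x + b_2, \qquad b_1 > b_2,
\]
where the two hyperplanes share the normal vector \(w\) and differ only in their offsets. Compared with the single positive/negative split produced by one hyperplane, the three regions they induce can be used to separate instances according to which labels they carry.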