Abstract: This letter presents a flexible rate-splitting multiple access (RSMA) framework for near-field (NF) integrated sensing and communications (ISAC). The spatial beams configured to meet the communication rate requirements of NF users are simultaneously leveraged to sense an additional NF target. A key innovation lies in the flexibility to select a subset of users for decoding the common stream, enhancing interference management and system performance. The system is designed by minimizing the Cram\'{e}r-Rao bound (CRB) for joint distance and angle estimation through optimized power allocation, common rate allocation, and user selection, which leads to a discrete, non-convex optimization problem. Remarkably, we demonstrate that the preconfigured beams are sufficient for target sensing, eliminating the need for additional probing signals. To solve the optimization problem, an iterative algorithm is proposed that combines the quadratic transform and simulated annealing. Simulation results indicate that the proposed scheme significantly outperforms conventional RSMA and space division multiple access (SDMA), reducing distance and angle estimation errors by approximately 100\% and 20\%, respectively.
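The iterative algorithm named above pairs the quadratic transform (for the continuous power and rate variables) with simulated annealing (for the discrete user selection). As a minimal sketch of the simulated-annealing outer loop only, the snippet below anneals over binary user-selection vectors; the toy objective is purely illustrative and stands in for the CRB value that the inner quadratic-transform optimization would return for a fixed selection:

```python
import math
import random

def simulated_annealing(n_users, objective, iters=500, t0=1.0, alpha=0.99, seed=0):
    """Anneal over binary user-selection vectors (1 = user decodes the common stream)."""
    rng = random.Random(seed)
    x = [1] * n_users                      # start with all users decoding the common stream
    best_x, best_f = x[:], objective(x)
    f, t = best_f, t0
    for _ in range(iters):
        # Neighbor: flip one user's selection bit.
        y = x[:]
        i = rng.randrange(n_users)
        y[i] ^= 1
        fy = objective(y)
        # Always accept downhill moves; accept uphill moves with Boltzmann probability.
        if fy <= f or rng.random() < math.exp(-(fy - f) / t):
            x, f = y, fy
            if f < best_f:
                best_x, best_f = x[:], f
        t *= alpha                         # geometric cooling schedule
    return best_x, best_f

# Toy surrogate objective (a stand-in for the CRB after the inner optimization):
# it simply prefers selecting exactly two of four users.
toy = lambda x: (sum(x) - 2) ** 2
sel, val = simulated_annealing(4, toy)
```

In the actual scheme, `objective(x)` would solve the inner continuous problem via the quadratic transform for the fixed selection `x` and return the resulting CRB.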
Abstract: Supporting immense throughput and ubiquitous connectivity is of paramount importance for future wireless networks. To this end, this letter focuses on how the spatial beams configured for legacy near-field (NF) users can be leveraged to serve additional NF or far-field users while ensuring the rate requirements of the legacy NF users. In particular, a flexible rate-splitting multiple access (RSMA) scheme is proposed to efficiently manage interference, which carefully selects a subset of legacy users to decode the common stream. Beam scheduling, power allocation, common rate allocation, and user selection are jointly optimized to maximize the sum rate of the additional users. To solve the formulated discrete non-convex problem, it is split into three subproblems, and accelerated bisection search, quadratic transform, and simulated annealing approaches are developed to solve them. Simulation results reveal that the proposed transmit scheme and algorithm achieve significant gains over three competing benchmarks.
Abstract: Large-aperture coprime arrays (CAs) are expected to achieve higher sensing resolution than conventional dense arrays (DAs), yet with lower hardware and energy cost. However, existing CA far-field localization methods cannot be directly applied to near-field scenarios due to channel model mismatch. To address this issue, in this paper we propose an efficient near-field localization method for CAs. Specifically, we first construct an effective covariance matrix, which allows the target angle and range estimation to be decoupled. Then, a customized two-phase multiple signal classification (MUSIC) algorithm for CAs is proposed, which first detects all possible target angles using an angular-domain MUSIC algorithm, and then resolves the true targets' angles and ranges by devising a range-domain MUSIC algorithm. Finally, we show that the proposed method is able to locate more targets than the subarray-based method as well as achieve a lower root mean square error (RMSE) than DAs.
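To make the MUSIC machinery concrete, the sketch below runs a plain angular-domain MUSIC spectrum search on a synthetic half-wavelength ULA under a far-field model. It is only a simplified stand-in for the two-phase coprime-array method described above, and all array and signal parameters are illustrative:

```python
import numpy as np

def music_spectrum(R, n_sources, grid, n_antennas, d=0.5):
    """MUSIC pseudo-spectrum over an angle grid (radians) for a ULA with
    inter-element spacing d in wavelengths."""
    eigval, eigvec = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvec[:, : n_antennas - n_sources]      # noise subspace (smallest eigenvalues)
    k = np.arange(n_antennas)
    P = np.empty(len(grid))
    for idx, theta in enumerate(grid):
        a = np.exp(2j * np.pi * d * k * np.sin(theta))   # far-field steering vector
        P[idx] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P

# Synthetic single-source example at 20 degrees.
rng = np.random.default_rng(0)
N, snapshots, theta0 = 8, 200, np.deg2rad(20)
a0 = np.exp(2j * np.pi * 0.5 * np.arange(N) * np.sin(theta0))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(a0, s) + 0.05 * (rng.standard_normal((N, snapshots))
                              + 1j * rng.standard_normal((N, snapshots)))
R = X @ X.conj().T / snapshots                    # sample covariance matrix
grid = np.deg2rad(np.linspace(-90, 90, 361))
P = music_spectrum(R, 1, grid, N)
est = np.rad2deg(grid[np.argmax(P)])              # peak location = angle estimate
```

The paper's method replaces this steering vector with the near-field (spherical-wavefront) CA model and adds the second, range-domain phase to resolve true targets.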
Abstract: In this paper, we study efficient multi-beam training design for near-field communications to reduce the beam training overhead of conventional single-beam training methods. In particular, the array-division based multi-beam training method, which is widely used in far-field communications, cannot be directly applied to the near-field scenario, since different sub-arrays may observe different user angles and there exist coverage holes in the angular domain. To address these issues, we first devise a new near-field multi-beam codebook by sparsely activating a portion of the antennas to form a sparse linear array (SLA), hence generating multiple beams simultaneously by effectively exploiting the near-field grating lobes. Next, a two-stage near-field beam training method is proposed, in which several candidate user locations are first identified based on multi-beam sweeping over time, and the second stage then determines the true user location with a small number of single-beam sweeps. Finally, numerical results show that our proposed multi-beam training method significantly reduces the beam training overhead of conventional single-beam training methods, yet achieves comparable rate performance in data transmission.
Abstract: Extremely large-scale array (XL-array) has emerged as a promising technology to improve the spectral efficiency and spatial resolution of future sixth-generation (6G) wireless systems. The sharp increase in the number of antennas renders communication users more likely to be located in the near-field region, which requires a more accurate spherical (instead of planar) wavefront propagation model. This inevitably incurs unaffordable beam training overhead when performing a two-dimensional (2D) beam search over both the angular and range domains. To address this issue, we first introduce a new sparse discrete Fourier transform (DFT) codebook, whose received beam pattern at the user exhibits angular periodicity, motivating us to propose a three-phase beam training scheme. Specifically, in the first phase, we utilize the sparse DFT codebook for beam sweeping in an angular subspace and estimate candidate user angles according to the received beam pattern. Then, a central sub-array is activated to scan the specific candidate angles, resolving the angular ambiguity and identifying the user angle. In the third phase, the polar-domain codebook is applied at the estimated angle to search for the best effective range of the user. Finally, numerical results show that the proposed beam training scheme enabled by the sparse DFT codebook achieves a 98.67% overhead reduction as compared with the exhaustive-search scheme, yet without compromising rate performance in the high signal-to-noise ratio (SNR) regime.
Abstract: Extremely large-scale array (XL-array) has emerged as a promising technology to enable near-field communications for achieving enhanced spectrum efficiency and spatial resolution, by drastically increasing the number of antennas. However, this also inevitably incurs higher hardware and energy cost, which may not be affordable in future wireless systems. To address this issue, we propose in this paper to exploit two types of sparse arrays (SAs) for enabling near-field communications. Specifically, we first consider the linear sparse array (LSA) and characterize its near-field beam pattern. It is shown that despite the achieved beam-focusing gain, the LSA introduces several undesired grating-lobes, which have comparable beam power with the main-lobe and are focused on specific regions. An efficient hybrid beamforming design is then proposed for the LSA to deal with the potential strong inter-user interference (IUI). Next, we consider another form of SA, called extended coprime array (ECA), which is composed of two LSA subarrays with different (coprime) inter-antenna spacing. By characterizing the ECA near-field beam pattern, we show that compared with the LSA with the same array sparsity, the ECA can greatly suppress the beam power of near-field grating-lobes thanks to the offset effect of the two subarrays, albeit with a larger number of grating-lobes. This thus motivates us to propose a customized two-phase hybrid beamforming design for the ECA. Finally, numerical results are presented to demonstrate the rate performance gain of the proposed two SAs over the conventional uniform linear array (ULA).
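A far-field toy example suffices to show why sparse inter-antenna spacing produces grating lobes with beam power comparable to the main lobe (the abstract's analysis characterizes the near-field counterpart; the geometry below is illustrative only): steering a 4x-sparse array toward broadside yields full array gain again at 30 degrees, where a half-wavelength ULA of the same size has essentially none.

```python
import numpy as np

def ula_gain(n, spacing, theta_steer, theta_obs):
    """Normalized far-field beamforming gain of an n-element uniform array,
    with spacing given in wavelengths and angles in radians."""
    k = np.arange(n)
    a_s = np.exp(2j * np.pi * spacing * k * np.sin(theta_steer))  # steering vector
    a_o = np.exp(2j * np.pi * spacing * k * np.sin(theta_obs))    # observation direction
    return np.abs(a_s.conj() @ a_o) / n

n = 16
# Half-wavelength ULA steered to broadside: negligible gain at 30 degrees.
dense = ula_gain(n, 0.5, 0.0, np.deg2rad(30))
# 4x-sparse array (2-wavelength spacing): a full-power grating lobe at 30 degrees,
# since 2 * sin(30 deg) = 1 makes all element phases wrap to the same value.
sparse = ula_gain(n, 2.0, 0.0, np.deg2rad(30))
```

In the near field the grating lobes focus on specific regions rather than exact angles, which is what the LSA/ECA beam-pattern characterizations in the paper quantify.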
Abstract: We propose a neural audio generative model, MDCTNet, operating in the perceptually weighted domain of an adaptive modified discrete cosine transform (MDCT). The architecture of the model captures correlations in both time and frequency directions with recurrent layers (RNNs). An audio coding system is obtained by training MDCTNet on a diverse set of fullband monophonic audio signals at 48 kHz sampling, conditioned by a perceptual audio encoder. In a subjective listening test with ten excerpts chosen to be balanced across content types, yet stressful for both codecs, the mean performance of the proposed system at 24 kb/s variable bitrate (VBR) is similar to that of Opus at twice the bitrate.
Abstract: It is crucial to choose the appropriate scale in order to build an effective and informative representation of a complex system. Scientists carefully choose the scales of their experiments to extract the variables that describe the causalities in the system. They have found that the coarse scale (macro) is sometimes more causal and informative than the numerous-parameter observations (micro). The phenomenon that causality emerges under coarse-graining is called causal emergence (CE). Based on information theory, a number of recent works have quantitatively shown that CE indeed happens when coarse-graining a micro model to the macro. However, the existing works have not discussed why and when CE happens. We quantitatively analyze the redistribution of uncertainties under coarse-graining and suggest that this redistribution is the cause of causal emergence. We further analyze the thresholds that determine whether CE happens or not. From the regularity of the transition probability matrix (TPM) of discrete systems, the mathematical expressions of the model properties are derived, and the threshold values for different operations are computed. The results provide the critical and specific conditions of CE as helpful guidance for choosing the proper coarse-graining operation, and also offer a new way to better understand the nature of causality and causal emergence.
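The information-theoretic quantification of CE referred to above is commonly done via the effective information (EI) of a TPM. The sketch below, a standard textbook-style example rather than the paper's specific model, computes EI before and after a coarse-graining that makes the macro dynamics deterministic, so EI increases and CE occurs:

```python
import numpy as np

def effective_information(tpm):
    """EI in bits: average KL divergence of each row from the average row,
    i.e., mutual information under a maximum-entropy intervention on states."""
    tpm = np.asarray(tpm, dtype=float)
    mean_row = tpm.mean(axis=0)
    ei = 0.0
    for row in tpm:
        mask = row > 0                                     # 0 * log 0 := 0
        ei += np.sum(row[mask] * np.log2(row[mask] / mean_row[mask]))
    return ei / len(tpm)

# Micro model: states {0,1,2} mix uniformly among themselves; state 3 is absorbing.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]
# Coarse-graining {0,1,2} -> alpha, {3} -> beta yields a deterministic macro TPM.
macro = [[1, 0],
         [0, 1]]
ei_micro = effective_information(micro)   # about 0.81 bits
ei_macro = effective_information(macro)   # exactly 1 bit
# Causal emergence: ei_macro > ei_micro.
```

The paper's threshold analysis asks exactly when this inequality holds for a given TPM regularity and coarse-graining operation.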
Abstract: In this technical report, we introduce Effidit (Efficient and Intelligent Editing), a digital writing assistant that helps users write higher-quality text more efficiently by using artificial intelligence (AI) technologies. Previous writing assistants typically provide error checking (to detect and correct spelling and grammatical errors) and limited text-rewriting functionality. With the emergence of large-scale neural language models, some systems support automatically completing a sentence or a paragraph. In Effidit, we significantly expand the capabilities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME). In the text completion category, Effidit supports generation-based sentence completion, retrieval-based sentence completion, and phrase completion, whereas many other writing assistants so far provide only one or two of these three functions. For text polishing, we offer three functions: (context-aware) phrase polishing, sentence paraphrasing, and sentence expansion, whereas many other writing assistants often support only one or two functions in this category. The main contents of this report include the major modules of Effidit, the methods for implementing these modules, and evaluation results for some key methods.
Abstract: People perceive the world with multiple senses (e.g., through hearing sounds, reading words, and seeing objects). However, most existing AI systems process only an individual modality. This paper presents an approach that excels at handling multiple modalities of information with a single model. In our "{SkillNet}" model, different parts of the parameters are specialized for processing different modalities. Unlike traditional dense models that always activate all model parameters, our model sparsely activates the parts of the parameters whose skills are relevant to the task. Such a model design enables SkillNet to learn skills in a more interpretable way. We develop our model for five modalities: text, image, sound, video, and code. Results show that SkillNet performs comparably to five modality-specific fine-tuned models. Moreover, our model supports self-supervised pretraining in the same sparsely activated manner, resulting in better-initialized parameters for the different modalities. We find that pretraining significantly improves the performance of SkillNet on the five modalities, on par with or even better than baselines with modality-specific pretraining. On the task of Chinese text-to-image retrieval, our final system achieves higher accuracy than existing leading systems, including Wukong{ViT-B} and Wenlan 2.0, while using fewer activated parameters.