Abstract:Low-coherence sequences with low peak-to-average power ratio (PAPR) are crucial for multi-carrier wireless communication systems and are used for pilots, spreading sequences, and so on. This letter proposes an efficient low-coherence sequence design algorithm (LOCEDA) that can generate any number of sequences of any length that satisfy user-defined PAPR constraints while supporting flexible subcarrier assignments in orthogonal frequency-division multiple access (OFDMA) systems. We first visualize the low-coherence sequence design problem under PAPR constraints as resolving collisions between hyperspheres. By iteratively adjusting the radii and positions of these hyperspheres, we effectively generate low-coherence sequences that strictly satisfy the imposed PAPR constraints. Simulation results (i) confirm that LOCEDA outperforms existing methods, (ii) demonstrate its flexibility, and (iii) highlight its potential for various application scenarios.
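To make the hypersphere picture concrete, the following is a minimal, hypothetical sketch (not the authors' LOCEDA implementation): unit-norm sequences are repeatedly pushed apart whenever their correlation exceeds a coherence target, and each update is followed by a simple clipping step to re-impose the PAPR constraint. The step size, the coherence and PAPR targets, and the treatment of the sequence itself as the time-domain signal (rather than the oversampled OFDMA waveform) are all simplifying assumptions.

```python
import numpy as np

def papr(x):
    """Peak-to-average power ratio of a sequence."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def clip_to_papr(x, papr_max, rounds=50):
    """Iteratively clip peaks until the PAPR constraint holds, then renormalize."""
    for _ in range(rounds):
        if papr(x) <= papr_max:
            break
        limit = np.sqrt(papr_max * np.mean(np.abs(x) ** 2))
        mag = np.maximum(np.abs(x), 1e-12)
        x = np.where(mag > limit, x * (limit / mag), x)
    return x / np.linalg.norm(x)

def design_low_coherence(K, N, papr_max=3.0, mu_target=0.35, iters=2000, seed=0):
    """Toy collision-resolution loop: separate the worst-correlated pair
    ("overlapping hyperspheres"), then restore the PAPR constraint."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    for _ in range(iters):
        G = S @ S.conj().T
        np.fill_diagonal(G, 0)
        i, j = np.unravel_index(np.argmax(np.abs(G)), G.shape)
        if np.abs(G[i, j]) <= mu_target:   # no collisions remain
            break
        S[i] -= 0.1 * G[i, j] * S[j]       # push sequence i away from sequence j
        S[i] = clip_to_papr(S[i], papr_max)
    return S

S = design_low_coherence(K=16, N=8)
print("max coherence:", np.abs(np.triu(S @ S.conj().T, 1)).max())
print("worst PAPR:", max(papr(s) for s in S))
```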
Abstract:Sixth generation (6G) networks are envisioned to integrate sensing and communications in a single system, thus greatly improving spectrum utilization and reducing hardware costs. Low earth orbit (LEO) satellite communications combined with massive multiple-input multiple-output (MIMO) technology hold significant promise in offering ubiquitous and seamless connectivity with high data rates. Existing integrated sensing and communications (ISAC) studies mainly focus on terrestrial systems, whereas operating ISAC in massive MIMO LEO satellite systems promises to provide high-capacity communication and flexible sensing ubiquitously. In this paper, we first give an overview of LEO satellite systems and ISAC and consider adopting ISAC in massive MIMO LEO satellite systems. Then, recent research advances are presented, followed by a discussion of related challenges and key enabling technologies. Finally, we point out some open issues and promising research directions.
Abstract:The training and inference of large language models (LLMs) are together a costly process that transports knowledge from raw data to meaningful computation. Inspired by the memory hierarchy of the human brain, we reduce this cost by equipping LLMs with explicit memory, a memory format cheaper than model parameters and text retrieval-augmented generation (RAG). Conceptually, with most of its knowledge externalized to explicit memories, the LLM can enjoy a smaller parameter size, training cost, and inference cost, all proportional to the amount of remaining "abstract knowledge". As a preliminary proof of concept, we train from scratch a 2.4B LLM, which achieves better performance than much larger LLMs as well as RAG models, and maintains higher decoding speed than RAG. The model is named $\text{Memory}^3$, since explicit memory is the third form of memory in LLMs after implicit memory (model parameters) and working memory (context key-values). We introduce a memory circuitry theory to support the externalization of knowledge, and present novel techniques including a memory sparsification mechanism that makes storage tractable and a two-stage pretraining scheme that facilitates memory formation.
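As a purely illustrative sketch of the three-tier memory idea (not the $\text{Memory}^3$ architecture itself), the snippet below treats explicit memories as precomputed key-value pairs that are sparsified for storage and concatenated with the working-memory key-values at attention time; the norm-based sparsification rule, shapes, and retrieval step are assumptions made for illustration only.

```python
import numpy as np

def sparsify_kv(keys, values, keep_ratio=0.2):
    """Toy stand-in for memory sparsification: keep only the strongest
    key/value pairs so that stored explicit memories stay tractable."""
    k = max(1, int(keep_ratio * len(keys)))
    idx = np.argsort(-np.linalg.norm(keys, axis=-1))[:k]
    return keys[idx], values[idx]

def attend(q, ctx_k, ctx_v, mem_k, mem_v):
    """Single-head attention over working memory (context KVs) concatenated
    with retrieved explicit-memory KVs; implicit memory would live in the
    projection weights, which are omitted here."""
    K = np.concatenate([mem_k, ctx_k], axis=0)
    V = np.concatenate([mem_v, ctx_v], axis=0)
    scores = K @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(0)
d = 16
mem_k, mem_v = sparsify_kv(rng.standard_normal((128, d)), rng.standard_normal((128, d)))
out = attend(rng.standard_normal(d),
             rng.standard_normal((8, d)), rng.standard_normal((8, d)),
             mem_k, mem_v)
print(out.shape)   # (16,)
```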
Abstract:Unsourced random access (URA) has emerged as a viable scheme for supporting massive machine-type communications (mMTC) in sixth generation (6G) wireless networks. Notably, tensor-based URA (TURA), with its inherent tensor structure, stands out by simultaneously enhancing performance and reducing the computational complexity of multi-user separation, especially in mMTC networks with a large number of active devices. However, the current TURA scheme lacks a soft decoder, precluding the incorporation of existing advanced coding techniques. To fully explore the potential of TURA, this paper investigates the polar-coded TURA (PTURA) scheme and develops the corresponding iterative Bayesian receiver with feedback (IBR-FB). Specifically, in the IBR-FB, we propose the Grassmannian modulation-aided Bayesian tensor decomposition (GM-BTD) algorithm under the variational Bayesian learning (VBL) framework, which leverages the properties of Grassmannian modulation to facilitate the convergence of the VBL process and generates the required soft information without knowledge of the number of active devices. Furthermore, based on the soft information produced by GM-BTD, we design the soft Grassmannian demodulator in the IBR-FB. Extensive simulation results demonstrate that the proposed PTURA, in conjunction with the IBR-FB, surpasses existing state-of-the-art unsourced random access schemes in terms of accuracy and computational complexity.
Abstract:In massive multiple-input multiple-output (MIMO) systems, downlink transmission performance heavily relies on accurate channel state information (CSI). Constrained by transmit power, user equipment typically transmits sounding reference signals (SRSs) to the base station through frequency hopping; these SRSs are leveraged to estimate uplink CSI and subsequently predict downlink CSI. This paper investigates joint channel estimation and prediction (JCEP) for massive MIMO with frequency hopping sounding (FHS). Specifically, we present a multiple-subband (MS) delay-angle-Doppler (DAD) domain channel model with an off-grid basis to tackle the energy leakage problem. Furthermore, we formulate the JCEP problem with FHS as a multiple measurement vector (MMV) problem, facilitating the sharing of common CSI across different subbands. To solve this problem, we propose an efficient Off-Grid-MS hybrid message passing (HMP) algorithm under the constrained Bethe free energy (BFE) framework. To address the lack of prior CSI in practical scenarios, the proposed algorithm adaptively learns the channel hyper-parameters by minimizing the corresponding terms in the BFE expression. To alleviate the complexity of channel hyper-parameter learning, we leverage approximations of the off-grid matrices to simplify the off-grid hyper-parameter estimation. Numerical results illustrate that the proposed algorithm effectively mitigates the energy leakage issue and exploits the common CSI across different subbands, acquiring more accurate CSI than state-of-the-art counterparts.
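The Off-Grid-MS HMP algorithm itself is not reproduced here, but the benefit of the MMV formulation, namely that all subbands share a common support, can be illustrated with a much simpler stand-in such as simultaneous orthogonal matching pursuit (SOMP). The dictionary, sparsity level, and problem sizes below are placeholders, not values from the paper.

```python
import numpy as np

def somp(Y, A, k):
    """Simultaneous OMP: recover a row-sparse X from Y = A @ X by exploiting
    the support shared by all measurement vectors (columns of Y = subbands)."""
    n = A.shape[1]
    support, R = [], Y.copy()
    for _ in range(k):
        corr = np.linalg.norm(A.conj().T @ R, axis=1)   # aggregate over subbands
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        Xs, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ Xs
    X = np.zeros((n, Y.shape[1]), dtype=Y.dtype)
    X[support] = Xs
    return X

rng = np.random.default_rng(1)
m, n, L, k = 32, 128, 4, 5                 # measurements, grid size, subbands, paths
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, L))
X_true[rng.choice(n, k, replace=False)] = rng.standard_normal((k, L))
Y = A @ X_true + 0.01 * rng.standard_normal((m, L))
print(np.linalg.norm(somp(Y, A, k) - X_true) / np.linalg.norm(X_true))
```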
Abstract:In this paper, we propose a unified framework based on equivariance for the design of artificial intelligence (AI)-assisted technologies in multi-user multiple-input multiple-output (MU-MIMO) systems. We first provide definitions of multidimensional equivariance, high-order equivariance, and multidimensional invariance (referred to collectively as tensor equivariance). On this basis, by investigating the design of precoding and user scheduling, which are key techniques in MU-MIMO systems, we reveal the tensor equivariance of the mappings from channel information to optimal precoding tensors, precoding auxiliary tensors, and scheduling indicators, respectively. To model mappings with tensor equivariance, we propose a series of plug-and-play tensor equivariant neural network (TENN) modules, in which computation involving intricate parameter-sharing patterns is transformed into concise tensor operations. Building upon the TENN modules, we propose a unified tensor equivariance framework applicable to various communication tasks, based on which we readily design the corresponding AI-assisted precoding and user scheduling schemes. Simulation results demonstrate that the constructed precoding and user scheduling methods achieve near-optimal performance while exhibiting significantly lower computational complexity and generalizing to inputs with varying sizes across multiple dimensions, which validates the superiority of the TENN modules and the unified framework.
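As a minimal sketch of the parameter-sharing idea behind equivariant modules, restricted to a single dimension (the TENN modules in the paper operate over multiple tensor dimensions), the layer below combines a per-user linear map with a pooled, broadcast term, so permuting the users permutes the outputs in exactly the same way. The sizes and the equivariance check are hypothetical.

```python
import numpy as np

def equivariant_layer(X, W_self, W_pool):
    """Permutation-equivariant layer over the first (e.g. user) dimension.
    X: (num_users, d_in); W_self, W_pool: (d_in, d_out)."""
    pooled = X.mean(axis=0, keepdims=True)   # permutation-invariant summary
    return X @ W_self + pooled @ W_pool      # broadcast the summary to every user

# equivariance check with placeholder sizes
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
W1, W2 = rng.standard_normal((8, 3)), rng.standard_normal((8, 3))
perm = rng.permutation(4)
out = equivariant_layer(X, W1, W2)
assert np.allclose(out[perm], equivariant_layer(X[perm], W1, W2))
```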
Abstract:Integrated communications and localization (ICAL) will play an important part in future sixth generation (6G) networks for the realization of Internet of Everything (IoE) to support both global communications and seamless localization. Massive multiple-input multiple-output (MIMO) low earth orbit (LEO) satellite systems have great potential in providing wide coverage with enhanced gains, and thus are strong candidates for realizing ubiquitous ICAL. In this paper, we develop a wideband massive MIMO LEO satellite system to simultaneously support wireless communications and localization operations in the downlink. In particular, we first characterize the signal propagation properties and derive a localization performance bound. Based on these analyses, we focus on the hybrid analog/digital precoding design to achieve high communication capability and localization precision. Numerical results demonstrate that the proposed ICAL scheme supports both the wireless communication and localization operations for typical system setups.
Abstract:This paper investigates the robust design of symbol-level precoding (SLP) for multiuser multiple-input multiple-output (MIMO) downlink transmission with imperfect channel state information (CSI) caused by channel aging. By utilizing the a posteriori channel model based on the widely adopted jointly correlated channel model, the imperfect CSI is modeled as statistical CSI incorporating the channel mean and channel variance information with spatial correlation. With the signal model in the presence of channel aging, we formulate the signal-to-interference-plus-noise ratio (SINR) balancing and minimum mean square error (MMSE) problems for robust SLP design. The former aims to maximize the minimum SINR across users, while the latter minimizes the mean square error between the received signal and the target constellation point. In massive MIMO scenarios, the increase in the number of antennas poses a computational complexity challenge that limits the deployment of SLP schemes. To address this challenge, we simplify the objective function of the SINR balancing problem and derive a closed-form SLP scheme. In addition, by approximating the matrix involved in the computation, we modify the proposed algorithm and develop an MMSE-based SLP scheme with lower computational complexity. Simulation results confirm the superiority of the proposed schemes over state-of-the-art SLP schemes.
Abstract:Retrieval-Augmented Generation (RAG) is a technique that enhances the capabilities of large language models (LLMs) by incorporating external knowledge sources. This method addresses common LLM limitations, including outdated information and the tendency to produce inaccurate "hallucinated" content. However, evaluating RAG systems is challenging, as existing benchmarks are limited in scope and diversity. Most current benchmarks predominantly assess question-answering applications, overlooking the broader spectrum of situations where RAG could prove advantageous. Moreover, they evaluate only the performance of the LLM component of the RAG pipeline and neglect the influence of the retrieval component and the external knowledge database. To address these issues, this paper constructs a large-scale and more comprehensive benchmark and evaluates all the components of RAG systems across various RAG application scenarios. Specifically, we categorize the range of RAG applications into four distinct types: Create, Read, Update, and Delete (CRUD), each representing a unique use case. "Create" refers to scenarios requiring the generation of original, varied content. "Read" involves responding to intricate questions in knowledge-intensive situations. "Update" focuses on revising and rectifying inaccuracies or inconsistencies in pre-existing texts. "Delete" pertains to the task of summarizing extensive texts into more concise forms. For each of these CRUD categories, we develop comprehensive datasets to evaluate the performance of RAG systems. We also analyze the effects of various components of the RAG system, such as the retriever, the context length, the knowledge base construction, and the LLM. Finally, we provide useful insights for optimizing RAG technology for different scenarios.
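For orientation, the sketch below shows a minimal, generic RAG pipeline in which the components the benchmark varies (retriever, context length, knowledge base, and LLM) appear as explicit parameters; the function names and interfaces are illustrative placeholders, not part of the benchmark's code.

```python
def rag_answer(task, knowledge_base, retriever, llm, top_k=4, max_context_chars=2000):
    """Generic RAG pipeline: retrieve, truncate to a context budget, generate.
    `retriever` and `llm` stand in for whichever implementations are being evaluated."""
    docs = retriever(task, knowledge_base, top_k)          # retrieval component
    context = "\n\n".join(docs)[:max_context_chars]        # context-length budget
    prompt = f"Context:\n{context}\n\nTask:\n{task}"       # Create/Read/Update/Delete task
    return llm(prompt)                                     # LLM component
```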
Abstract:The monitoring of vital signs such as heart rate (HR) and respiratory rate (RR) during sleep is important for the assessment of sleep quality and detection of sleep disorders. Camera-based HR and RR monitoring has gained popularity in sleep monitoring in recent years. However, such methods face serious privacy issues when a video camera is used in the sleeping scenario. In this paper, we propose to use a defocused camera to measure vital signs from optically blurred images, which fundamentally eliminates the privacy invasion because faces are difficult to identify in the resulting blurry images. A spatial-redundant framework involving living-skin detection is used to extract HR and RR from the defocused camera in the near-infrared (NIR), and a motion metric is designed to exclude outliers caused by body motions. In the benchmark, the overall Mean Absolute Error (MAE) is 4.4 bpm for HR measurement and 5.9 bpm for RR measurement. Both show quality drops compared to measurement with a focused camera, but the degradation in HR is much smaller, i.e., HR measurement retains a strong correlation with the reference ($R \geq 0.90$). Preliminary experiments suggest that it is feasible to use a defocused camera for cardio-respiratory monitoring while protecting privacy. Further improvement is needed for robust RR measurement, such as PPG-modulation-based RR extraction.
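As a rough illustration of motion-based outlier exclusion (the paper's actual motion metric is not specified in the abstract), one simple choice is the mean absolute inter-frame difference, with windows above a threshold discarded before HR/RR extraction; the window length and threshold below are placeholders.

```python
import numpy as np

def motion_metric(frames):
    """Mean absolute inter-frame difference; large values flag body motion.
    frames: (T, H, W) NIR intensity stack."""
    return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

def motion_gated_windows(frames, win=128, thresh=2.0):
    """Start indices of sliding windows whose motion stays below the threshold,
    i.e. the windows kept for HR/RR extraction."""
    m = motion_metric(frames)
    return [t for t in range(0, frames.shape[0] - win)
            if m[t:t + win - 1].max() < thresh]
```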