Abstract:The 5th generation (5G) of wireless systems is being deployed with the aim of providing a wide range of wireless communication services, such as low data rates for a massive number of devices, broadband, low latency, and industrial wireless access. Such an aim is even more complex in the next generation of wireless systems (6G), where wireless connectivity is expected to serve any connected intelligent unit, such as software robots and humans interacting in the metaverse, autonomous vehicles, drones, trains, or smart sensors monitoring cities, buildings, and the environment. Because wireless devices will be orders of magnitude denser than in 5G cellular systems, and because of their complex quality of service requirements, access to the wireless spectrum will have to be appropriately shared to avoid congestion, poor quality of service, or unsatisfactory communication delays. Spectrum sharing methods have been the object of intense study through model-based approaches, such as optimization and game theory. However, these methods may fail when facing the complexity of the communication environments in 5G, 6G, and beyond. Recently, there has been significant interest in the application and development of data-driven methods, namely machine learning methods, to handle the complex operation of spectrum sharing. In this survey, we provide a complete overview of the state-of-the-art of machine learning for spectrum sharing. First, we map the most prominent machine learning methods encountered in spectrum sharing. Then, we show how these methods are applied to the numerous dimensions and sub-problems of spectrum sharing, such as spectrum sensing, spectrum allocation, spectrum access, and spectrum handoff. We also highlight several open questions and future trends.
Abstract:Over-the-air computation (AirComp) is considered a communication-efficient solution for data aggregation and distributed learning that exploits the superposition property of wireless multiple-access channels. However, AirComp is significantly affected by the uneven signal attenuation experienced by different wireless devices. Recently, Cell-free Massive MIMO (mMIMO) has emerged as a promising technology for providing uniform coverage and high rates through joint coherent transmission. In this paper, we investigate AirComp in Cell-free mMIMO systems, taking into account spatially correlated fading and channel estimation errors. In particular, we propose optimal designs of the transmit coefficients and receive combining at different levels of cooperation among the access points. Numerical results demonstrate that Cell-free mMIMO with fully centralized processing significantly outperforms conventional Cellular mMIMO with regard to the mean squared error (MSE). Moreover, we show that Cell-free mMIMO with local processing and large-scale fading decoding can achieve a lower MSE than Cellular mMIMO when the wireless devices have limited power budgets.
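The basic AirComp principle underlying this design can be sketched with a toy Monte-Carlo simulation: if each device pre-scales its value by the inverse of its channel gain (an idealized choice that ignores the power constraints, spatial correlation, and channel estimation errors the paper actually treats), the superposed received signal equals the desired sum plus noise, so the MSE collapses to the noise variance. The function name and the uniform channel model below are illustrative assumptions, not the paper's system model.

```python
import random
import statistics

random.seed(0)

def aircomp_sum_mse(num_devices=8, noise_std=0.1, trials=2000):
    """Monte-Carlo MSE of an idealized AirComp sum estimate.

    Each device k pre-scales its value s_k by 1/h_k (perfect channel
    inversion, ignoring power limits), so the noiseless superposed
    signal at the receiver is exactly sum_k s_k.
    """
    errors = []
    for _ in range(trials):
        s = [random.uniform(-1, 1) for _ in range(num_devices)]
        h = [random.uniform(0.5, 1.5) for _ in range(num_devices)]
        # superposition over the multiple-access channel
        y = sum(hk * (sk / hk) for hk, sk in zip(h, s))
        y += random.gauss(0, noise_std)  # receiver noise
        errors.append((y - sum(s)) ** 2)
    return statistics.mean(errors)

mse = aircomp_sum_mse()
# with perfect inversion the MSE is close to noise_std**2 = 0.01
```

With uneven attenuation and limited power budgets this inversion is exactly what becomes infeasible, which is the gap the paper's optimized transmit coefficients and receive combining address.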
Abstract:Over-the-air computation (AirComp) leverages the signal-superposition characteristic of wireless multiple access channels to perform mathematical computations. Initially introduced to enhance communication reliability in interference channels and wireless sensor networks, AirComp has more recently found applications in task-oriented communications, namely in wireless distributed learning and wireless control systems. Its adoption aims to address the latency challenges arising from an increasing number of edge or IoT devices accessing the constrained wireless spectrum. This paper focuses on the physical layer of these systems, specifically on the waveform and the signal processing aspects at the transmitter and receiver that meet the challenges AirComp presents in its different contexts and use cases.
Abstract:In this paper, we consider the ChannelComp framework, which enables multiple transmitters to compute desired functions at a common receiver over a multiple access channel using digital modulations. While ChannelComp offers a broad framework for computation, designing digital constellations for over-the-air computation with symbol-level encoding, encoding repeated transmissions of the same symbol and using the corresponding received sequence may significantly improve the computation performance and reduce the encoding complexity. In this paper, we propose an enhancement that encodes the repetitive transmission of the same symbol at each transmitter over multiple time slots and designs the constellation diagrams, with the aim of minimizing the computation error. We frame this enhancement as an optimization problem that jointly identifies the constellation diagram and the repetition channel code, a scheme we call ReChCompCode. To manage the computational complexity of the optimization, we divide it into two tractable subproblems. Through numerical experiments, we evaluate the performance of ReChCompCode. The simulation results reveal that ReChCompCode can reduce the computation error by up to approximately 30 dB compared to standard ChannelComp, particularly for product functions.
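The repetition gain that motivates encoding the same symbol over multiple time slots can be illustrated with a minimal model: averaging N noisy repetitions of one superposed symbol cuts the noise variance by roughly a factor of N. This sketch deliberately omits the joint constellation/code design that is ReChCompCode's actual contribution; the function name and parameters are hypothetical.

```python
import random
import statistics

random.seed(1)

def repeated_symbol_mse(repeats, noise_std=0.5, trials=3000):
    """MSE of estimating one superposed symbol from `repeats` noisy
    repetitions, averaged at the receiver (toy model of the
    repetition gain, not the paper's joint code design)."""
    errs = []
    for _ in range(trials):
        x = random.choice([-1.0, 1.0])  # the noiseless superposed symbol
        rx = [x + random.gauss(0, noise_std) for _ in range(repeats)]
        est = sum(rx) / repeats         # average over the time slots
        errs.append((est - x) ** 2)
    return statistics.mean(errs)

mse1 = repeated_symbol_mse(1)  # about noise_std**2 = 0.25
mse4 = repeated_symbol_mse(4)  # roughly 4x smaller
```

ReChCompCode goes further by choosing *which* symbol sequence each transmitter repeats jointly with the constellation, so the gain is not limited to plain noise averaging.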
Abstract:This paper considers a downlink cell-free multiple-input multiple-output (MIMO) network in which multiple multi-antenna base stations (BSs) serve multiple users via coherent joint transmission. To reduce the energy consumption of the radio frequency components, each BS selects a subset of antennas for downlink data transmission after estimating the channel state information (CSI). We aim to maximize the sum spectral efficiency by jointly optimizing the antenna selection and precoding design. To alleviate the fronthaul overhead and enable real-time network operation, we propose a distributed, scalable machine learning algorithm. In particular, at each BS, we deploy a convolutional neural network (CNN) for antenna selection and a graph neural network (GNN) for precoding design. Unlike conventional centralized solutions, which require a large amount of CSI and signaling exchange among the BSs, the proposed distributed machine learning algorithm takes only locally estimated CSI as input. With well-trained learning models, it is shown that the proposed algorithm significantly outperforms the distributed baseline schemes and achieves a sum spectral efficiency comparable to its centralized counterpart.
Abstract:In this work, we investigate federated edge learning over a fading multiple access channel. To alleviate the communication burden between the edge devices and the access point, we introduce a pioneering digital over-the-air computation strategy employing q-ary quadrature amplitude modulation, resulting in a low-latency communication scheme. Specifically, we propose a new federated edge learning framework in which the edge devices use digital modulation for over-the-air uplink transmission to the edge server while having no access to channel state information. Furthermore, we incorporate multiple antennas at the edge server to overcome the fading inherent in wireless communication, and we analyze the number of antennas required to mitigate the fading impact effectively. We prove a non-asymptotic upper bound on the mean squared error of the proposed federated learning scheme with digital over-the-air uplink transmissions under both noisy and fading conditions. Leveraging the derived upper bound, we characterize the convergence rate of the learning process for a non-convex loss function in terms of the mean squared error of the gradients due to the fading channel. Furthermore, we corroborate the theoretical guarantees through numerical experiments on the mean squared error and the convergence of the digital federated edge learning framework. Notably, the results demonstrate that increasing the number of antennas at the edge server and adopting higher-order modulations improve the model accuracy by up to 60%.
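The reason more antennas help when devices have no CSI is channel hardening: the per-antenna-averaged channel gain concentrates around its mean as the array grows, so fading looks increasingly deterministic to the edge server. A minimal sketch of that effect under an i.i.d. Rayleigh assumption (the paper's scheme and channel model are richer than this toy):

```python
import random
import statistics

random.seed(2)

def hardening_variance(num_antennas, trials=4000):
    """Variance of the averaged channel gain (1/M) * sum_m |h_m|^2
    for i.i.d. Rayleigh fading with E|h|^2 = 1. Channel hardening:
    this variance shrinks like 1/M, which is what lets the edge
    server tolerate fading without CSI at the devices (toy model,
    not the paper's full scheme)."""
    samples = []
    for _ in range(trials):
        gains = []
        for _ in range(num_antennas):
            re = random.gauss(0, 0.5 ** 0.5)
            im = random.gauss(0, 0.5 ** 0.5)
            gains.append(re * re + im * im)  # |h|^2, mean 1
        samples.append(sum(gains) / num_antennas)
    return statistics.pvariance(samples)

v1 = hardening_variance(1)    # close to 1
v16 = hardening_variance(16)  # close to 1/16
```

The paper's analysis quantifies how many antennas are needed so that this residual fading variability no longer dominates the gradient MSE.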
Abstract:Over-the-air computation (AirComp) is a well-known technique in which several wireless devices transmit with analog amplitude modulation to obtain the sum of their transmit signals at a common receiver. The underlying physical principle is the superposition property of radio waves. Since such superposition is analog and in amplitude, it is natural for AirComp to use analog amplitude modulations. Unfortunately, this is impractical because most wireless devices today use digital modulations. It would be highly desirable to use digital communications because of their numerous benefits, such as error correction, synchronization, acquisition of channel state information, and widespread use. However, when digital modulations are used for AirComp, a general belief is that the superposition property of the radio waves returns a meaningless overlap of the digital signals. In this paper, we break with this belief and propose an entirely new digital channel computing method named ChannelComp, which can use digital as well as analog modulations. We propose a feasibility optimization problem that ascertains the optimal modulation for computing arbitrary functions over the air. Additionally, we propose precoders to adapt existing digital modulation schemes for computing functions over the multiple access channel. The simulation results verify the superior performance of ChannelComp compared to AirComp, particularly for product functions, with an improvement of more than 10 dB in the computation error.
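The feasibility condition behind ChannelComp can be illustrated for two transmitters: a function is computable over the air only if the noiseless superposed constellation point uniquely determines the function value. The brute-force check below (a toy sketch; the actual framework *designs* the constellation via optimization rather than checking a fixed one) shows that with standard 4-PAM the sum of the message indices is computable, while their product is not, since two index pairs with different products land on the same received point.

```python
def computable_over_the_air(constellation, func):
    """Check, by enumeration, whether the noiseless superposed point
    x1 + x2 uniquely determines func(m1, m2) when message m is mapped
    to constellation[m]. Toy two-transmitter feasibility check."""
    seen = {}
    msgs = range(len(constellation))
    for m1 in msgs:
        for m2 in msgs:
            rx = constellation[m1] + constellation[m2]  # superposition
            val = func(m1, m2)
            if rx in seen and seen[rx] != val:
                return False  # one received point, two function values
            seen[rx] = val
    return True

pam4 = [-3, -1, 1, 3]  # standard 4-PAM, message index m -> 2*m - 3
# the sum m1 + m2 is recoverable from the superposed 4-PAM point
sum_ok = computable_over_the_air(pam4, lambda a, b: a + b)
# the product m1 * m2 is not: (0,3) and (1,2) both superpose to 0
prod_ok = computable_over_the_air(pam4, lambda a, b: a * b)
```

This is exactly why product functions are the hard case highlighted in the abstract: off-the-shelf constellations create such collisions, and ChannelComp's optimized constellations (or precoders) remove them.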
Abstract:Over-the-air computation (OAC) is a promising wireless communication method for aggregating data from many devices in dense wireless networks. The fundamental idea of OAC is to exploit signal superposition to compute functions of multiple simultaneously transmitted signals. However, the time and phase alignment of these superimposed signals has a significant effect on the quality of the function computation. In this study, we analyze the OAC problem for a system with unknown random time delays and phase shifts. We show that the classical matched filter does not produce optimal results and generates bias in the function estimates. To counteract this, we propose a new filter design and show that, under a bound on the maximum time delay, it is possible to achieve unbiased function computation. Additionally, we propose a Tikhonov regularization problem that produces an optimal filter for a given tradeoff between the bias and the noise-induced variance of the function estimates. When the time delays are long compared to the length of the transmitted pulses, our filter vastly outperforms the matched filter in terms of both bias and mean squared error (MSE). For shorter time delays, our proposal yields an MSE similar to that of the matched filter, while reducing the bias.
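The bias of the matched filter under an unknown delay is easy to see in a noise-free, discrete-time toy model: if a rectangular pulse arrives `delay` samples late but the filter still correlates over the nominal window, the overlap loss scales the estimate by (pulse_len - delay)/pulse_len. This is only an illustration of the bias mechanism, not the paper's filter design.

```python
def matched_filter_output(amplitude, delay, pulse_len=100):
    """Noise-free matched-filter output for a unit rectangular pulse,
    correlated over the nominal (zero-delay) window, when the pulse
    actually arrives `delay` samples late. The overlap loss
    (pulse_len - delay) / pulse_len shrinks the amplitude estimate,
    i.e., the estimate is biased (toy discrete-time model)."""
    pulse = [1.0] * pulse_len
    rx = [0.0] * (2 * pulse_len)
    for i in range(pulse_len):
        rx[i + delay] += amplitude * pulse[i]  # delayed arrival
    # correlate the received signal with the template, nominal window
    return sum(rx[i] * pulse[i] for i in range(pulse_len)) / pulse_len

exact = matched_filter_output(2.0, 0)    # 2.0: unbiased with no delay
biased = matched_filter_output(2.0, 25)  # 1.5: 25% of the overlap lost
```

With several superimposed pulses and random delays, these per-device losses do not cancel, which is why the paper replaces the matched filter with a design that trades bias against noise-induced variance.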
Abstract:The performance of modern wireless communications systems depends critically on the quality of the channel state information (CSI) available at the transmitter and receiver. Several previous works have proposed concepts and algorithms that help maintain high-quality CSI even in the presence of high mobility and channel aging, such as temporal prediction schemes that employ neural networks. However, it is still unclear which neural network-based scheme provides the best performance in terms of prediction quality, training complexity, and practical feasibility. To investigate this question, this paper first provides an overview of state-of-the-art neural networks applicable to channel prediction and compares their performance in terms of prediction quality. Next, a new comparative analysis is proposed for four promising neural networks with different prediction horizons. The well-known tapped delay line channel model recommended by the Third Generation Partnership Project (3GPP) is used for a standardized comparison among the neural networks. Based on this comparative evaluation, the advantages and disadvantages of each neural network are discussed, and guidelines for selecting the best-suited neural network for channel prediction applications are given.
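A minimal baseline makes the channel-aging problem concrete: on a first-order Gauss-Markov fading process, simply reusing the outdated CSI sample is strictly worse than even a one-tap linear predictor. This sketch is an illustrative baseline only; the neural predictors compared in the paper operate on the far richer 3GPP tapped delay line model, and all names below are hypothetical.

```python
import random
import statistics

random.seed(3)

def prediction_mse(rho=0.5, steps=20000):
    """One-step prediction of a Gauss-Markov (AR(1)) fading process
    h[t+1] = rho*h[t] + sqrt(1 - rho^2)*w[t], w ~ N(0, 1).
    Compares reusing the outdated sample h[t] (channel aging) with
    the linear MMSE predictor rho*h[t]. Toy scalar baseline, not one
    of the paper's neural network schemes."""
    h = random.gauss(0, 1)  # start in the stationary distribution
    err_outdated, err_linear = [], []
    for _ in range(steps):
        h_next = rho * h + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
        err_outdated.append((h_next - h) ** 2)       # outdated CSI
        err_linear.append((h_next - rho * h) ** 2)   # linear predictor
        h = h_next
    return statistics.mean(err_outdated), statistics.mean(err_linear)

mse_outdated, mse_linear = prediction_mse()
# theory for rho = 0.5: outdated 2*(1-rho) = 1.0, linear 1-rho^2 = 0.75
```

Neural predictors earn their complexity precisely where such simple linear models break down, e.g., longer horizons and non-Markovian, multi-tap channels.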
Abstract:We consider the problem of gridless blind deconvolution and demixing (GB2D) in scenarios where multiple users communicate messages through multiple unknown channels, and a single base station (BS) collects their contributions. This scenario arises in various communication fields, including wireless communications, the Internet of Things, over-the-air computation, and integrated sensing and communications. In this setup, each user's message is convolved with a multi-path channel formed by several scaled and delayed copies of Dirac spikes. The BS receives a linear combination of the convolved signals, and the goal is to recover the unknown amplitudes, continuously indexed delays, and transmitted waveforms from a compressed vector of measurements at the BS. However, in the absence of any prior knowledge of the transmitted messages and channels, GB2D is highly challenging and intractable in general. To address this issue, we assume that each user's message follows a distinct modulation scheme living in a known low-dimensional subspace. By exploiting these subspace assumptions and the sparsity of the multi-path channels of the different users, we transform the nonlinear GB2D problem into the recovery of a matrix tuple from a few linear measurements. To achieve this, we propose a semidefinite programming optimization that exploits the specific low-dimensional structure of the matrix tuple to recover the messages and the continuous delays of the different communication paths from a single received signal at the BS. Finally, our numerical experiments show that the proposed method effectively recovers all transmitted messages and the continuous delay parameters of the channels with a sufficient number of samples.