Abstract: This paper explores distributed Reconfigurable Intelligent Surfaces (RISs) by introducing a cooperative dimension that enhances adaptability and performance. It focuses on strategically deploying multiple RISs to improve Line-of-Sight (LoS) connectivity with the Base Station (BS) and among RISs, thereby aiding users in areas with weak BS coverage and enhancing spatial multiplexing gain. Each RIS can act as a main RIS (mRIS) to directly support users or as an intermediate RIS (iRIS) to reflect signals to another mRIS. This dual functionality allows for flexible responses to changing conditions. We implement an inter-RIS signal focusing design for phase shifts, creating a tailored codebook for precise control over signal direction. This design considers the interplay of incidence and reflection angles to maximize reflected signal power, based on the RIS response function and the physical properties of the RIS elements.
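To illustrate how incidence and reflection angles enter such a phase-shift design, the following Python sketch builds a small steering codebook for a uniform linear RIS using the standard phase-gradient (generalized Snell's law) rule; the element count, spacing, carrier frequency, and angle grid are illustrative assumptions, not the codebook proposed in the paper.

```python
import numpy as np

def ris_steering_phases(n_elements, spacing, wavelength, theta_in, theta_out):
    """Per-element phase shifts steering a plane wave impinging at theta_in (rad)
    toward the reflection angle theta_out (rad), for a uniform linear RIS with
    element spacing `spacing` (m). Standard phase-gradient rule, for illustration."""
    n = np.arange(n_elements)
    phase = -2.0 * np.pi * n * spacing / wavelength * (np.sin(theta_out) - np.sin(theta_in))
    return np.mod(phase, 2.0 * np.pi)

# Hypothetical codebook: a grid of reflection directions for a fixed incidence angle.
wavelength = 0.01   # 30 GHz carrier (illustrative)
codebook = [ris_steering_phases(64, wavelength / 2, wavelength,
                                theta_in=np.deg2rad(30), theta_out=np.deg2rad(a))
            for a in np.linspace(-60, 60, 16)]
```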
Abstract: Zero-order (ZO) optimization is a powerful tool for dealing with realistic constraints. On the other hand, the gradient-tracking (GT) technique has proved to be an efficient method for distributed optimization aiming to achieve consensus. However, it is a first-order (FO) method that requires knowledge of the gradient, which is not always available in practice. In this work, we introduce a zero-order distributed optimization method that combines the gradient-tracking technique with a one-point estimate of the gradient. We prove that this new technique converges with a single noisy function query at a time in the non-convex setting. We then establish a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ after a number of iterations $K$, which competes with the $O(\frac{1}{\sqrt[4]{K}})$ rate of its centralized counterparts. Finally, a numerical example validates our theoretical results.
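As a rough illustration of how a one-point estimate can drive gradient tracking, here is a minimal Python sketch of the two coupled updates over a network of agents; the mixing matrix, step-size schedules, and smoothing radius below are illustrative assumptions, not the parameters analyzed in the paper.

```python
import numpy as np

def one_point_grad(f, x, gamma, rng):
    """One-point zero-order gradient estimate: a single noisy function query
    at a randomly perturbed point (a standard form, shown for illustration)."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                 # random direction on the unit sphere
    return (d / gamma) * f(x + gamma * u) * u

def zo_gradient_tracking(local_fs, W, x0, K=2000, alpha=0.05, gamma=0.5, seed=0):
    """Minimal sketch of zero-order gradient tracking over a network.
    local_fs: list of (noisy) local objective oracles, one per agent.
    W: doubly stochastic mixing matrix describing the communication graph."""
    rng = np.random.default_rng(seed)
    n = len(local_fs)
    x = np.tile(x0, (n, 1))
    g = np.array([one_point_grad(f, x[i], gamma, rng) for i, f in enumerate(local_fs)])
    y = g.copy()                           # gradient trackers
    for k in range(1, K + 1):
        a_k, g_k = alpha / k**0.75, gamma / k**0.25   # illustrative decaying steps
        x = W @ x - a_k * y                # consensus + descent
        g_new = np.array([one_point_grad(f, x[i], g_k, rng)
                          for i, f in enumerate(local_fs)])
        y = W @ y + g_new - g              # track the average gradient estimate
        g = g_new
    return x.mean(axis=0)
```

Here each tracker y_i follows the network average of the one-point estimates, so the agents can move toward consensus on the sum objective with only a single noisy query per agent and per iteration.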
Abstract: Federated learning (FL) is a popular machine learning technique that enables multiple users to collaboratively train a model while maintaining user data privacy. A significant challenge in FL is the communication bottleneck in the upload direction, and thus the corresponding energy consumption of the devices, caused by the increasing size of the model/gradient. In this paper, we address this issue by proposing a zero-order (ZO) optimization method that requires each device to upload a single quantized scalar per iteration instead of the whole gradient vector. We prove its theoretical convergence and find an upper bound on its convergence rate in the non-convex setting, and we discuss its implementation in practical scenarios. Our FL method and the corresponding convergence analysis take into account the impact of quantization and packet dropping due to wireless errors. We also show the superiority of our method, in terms of communication overhead and energy consumption, compared to standard gradient-based FL methods.
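A toy sketch of what one such communication round could look like is given below; the uniform quantizer, shared perturbation direction, and packet-drop model are assumptions for illustration, not the exact scheme analyzed in the paper.

```python
import numpy as np

def quantize(value, step=0.01):
    """Uniform scalar quantizer (illustrative; the paper's quantizer may differ)."""
    return step * np.round(value / step)

def zo_fl_round(theta, device_losses, gamma, lr, p_drop, rng):
    """One communication round of a scalar-feedback ZO scheme (a sketch).
    Each device evaluates its loss at a commonly perturbed model, quantizes that
    single scalar, and uploads it; packets may be dropped with probability p_drop."""
    d = theta.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                               # shared perturbation direction
    received = []
    for loss in device_losses:
        scalar = quantize(loss(theta + gamma * u))       # one scalar per device
        if rng.random() > p_drop:                        # wireless packet dropping
            received.append(scalar)
    if received:                                         # ZO update from the average scalar
        theta = theta - lr * (d / gamma) * np.mean(received) * u
    return theta
```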
Abstract: In the literature, machine learning (ML) has been implemented at the base station (BS) and user equipment (UE) to improve the precision of downlink channel state information (CSI). However, ML implementation at the UE can be infeasible for various reasons, such as UE power consumption. Motivated by this issue, we propose a CSI learning mechanism at the BS, called CSILaBS, to avoid ML at the UE. To this end, by exploiting a channel predictor (CP) at the BS, a lightweight predictor function (PF) is considered for feedback evaluation at the UE. CSILaBS reduces over-the-air feedback overhead, improves CSI quality, and lowers the computation cost of the UE. Besides, in a multiuser environment, we propose various mechanisms to select the feedback by exploiting the PF while aiming to improve CSI accuracy. We also address various ML-based CPs, such as NeuralProphet (NP), an ML-inspired statistical algorithm. Furthermore, motivated by using a statistical model and ML together, we propose a novel hybrid framework composed of a recurrent neural network and NP, which yields better prediction accuracy than the individual models. The performance of CSILaBS is evaluated through an empirical dataset recorded at Nokia Bell-Labs. The outcomes show that eliminating ML at the UE can retain performance gains, for example, in precoding quality.
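One way to picture the feedback evaluation at the UE is a simple error test against the PF output, sketched below in Python; the normalized-error criterion and threshold are hypothetical and only illustrate the idea, not CSILaBS's actual selection mechanisms.

```python
import numpy as np

def ue_feedback_decision(h_measured, pf_predict, history, threshold=0.1):
    """Hypothetical feedback-evaluation rule at the UE: run the lightweight
    predictor function (PF) on past CSI and report feedback only when the
    normalized prediction error exceeds a threshold (illustrative choice)."""
    h_pred = pf_predict(history)
    nmse = np.linalg.norm(h_measured - h_pred) ** 2 / np.linalg.norm(h_measured) ** 2
    send_feedback = nmse > threshold
    return send_feedback, h_pred
```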
Abstract: Federated learning (FL) is a novel approach to machine learning that allows multiple edge devices to collaboratively train a model without disclosing their raw data. However, several challenges hinder the practical implementation of this approach, especially when devices and the server communicate over wireless channels, as it suffers from communication and computation bottlenecks in this case. Within a communication-efficient framework, we propose a novel zero-order (ZO) method with a one-point gradient estimator that harnesses the nature of the wireless communication channel without requiring knowledge of the channel state coefficient. It is the first method that includes the wireless channel in the learning algorithm itself instead of wasting resources to analyze it and remove its impact. The two main difficulties of this work are that, in FL, the objective function is usually non-convex, which makes the extension of FL to ZO methods challenging, and that including the impact of wireless channels requires extra attention. However, we overcome these difficulties and comprehensively analyze the proposed zero-order federated learning (ZOFL) framework. We establish its convergence theoretically, and we prove a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ in the nonconvex setting. We further demonstrate the potential of our algorithm with experimental results, taking into account independent and identically distributed (IID) and non-IID device data distributions.
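One way to picture how the channel can be kept inside the learning algorithm is sketched below: the unknown uplink fading coefficient simply scales the scalar each device returns, and the server's one-point update uses the received value as-is instead of first estimating and inverting the channel. The Rayleigh fading model, shared perturbation, and step sizes are illustrative assumptions, not the exact ZOFL algorithm.

```python
import numpy as np

def zofl_round(theta, device_losses, gamma, lr, rng):
    """Toy sketch of a channel-aware one-point ZO round (an interpretation of the
    ZOFL idea, not the paper's exact algorithm). The unknown fading gain h scales
    each device's returned scalar and is kept inside the stochastic estimate."""
    d = theta.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                                # broadcast perturbation
    received = 0.0
    for loss in device_losses:
        h = rng.rayleigh(scale=1.0)                       # unknown uplink fading gain
        received += h * loss(theta + gamma * u)           # channel scales the scalar
    return theta - lr * (d / gamma) * (received / len(device_losses)) * u
```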
Abstract: In this work, we consider a distributed multi-agent stochastic optimization problem, where each agent holds a local objective function that is smooth and convex and that is subject to a stochastic process. The goal is for all agents to collaborate to find a common solution that optimizes the sum of these local functions. Under the practical assumption that agents can only obtain noisy numerical function queries at exactly one point at a time, we extend the distributed stochastic gradient-tracking method to the bandit setting, where no estimate of the gradient is available, and we introduce a zero-order (ZO) one-point estimate (1P-DSGT). We analyze the convergence of this novel technique for smooth and convex objectives using stochastic approximation tools, and we prove that it converges almost surely to the optimum. We then study the convergence rate when the objectives are additionally strongly convex. We obtain a rate of $O(\frac{1}{\sqrt{k}})$ after a sufficient number of iterations $k > K_2$, which is usually optimal for techniques utilizing one-point estimators. We also provide a regret bound of $O(\sqrt{k})$, which is exceptionally good compared to the aforementioned techniques. We further illustrate the usefulness of the proposed technique using numerical experiments.
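For reference, a standard form of such a one-point estimate, using a smoothing radius $\gamma_k$ and a random unit direction $u_k$, is shown below; the exact estimator and normalization used for 1P-DSGT are specified in the paper.

$$
g_k \;=\; \frac{d}{\gamma_k}\, f\!\left(x_k + \gamma_k u_k,\, \xi_k\right) u_k,
\qquad
\mathbb{E}_{u_k,\xi_k}\!\left[\, g_k \mid x_k \,\right] \;=\; \nabla f_{\gamma_k}(x_k),
$$

where $f_{\gamma_k}$ is a smoothed version of $f$ whose gradient approaches $\nabla f$ as $\gamma_k \to 0$; the single noisy query $f(x_k + \gamma_k u_k, \xi_k)$ is exactly the bandit feedback assumed above.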
Abstract: The massive multiple-input multiple-output (mMIMO) regime reaps the benefits of spatial diversity and multiplexing gains, subject to precise channel state information (CSI) acquisition. In the current communication architecture, the downlink CSI is estimated by the user equipment (UE) via dedicated pilots and then fed back to the gNodeB (gNB). The feedback information is compressed with the goal of reducing over-the-air overhead. This compression increases the inaccuracy of the acquired CSI, thus degrading the overall spectral efficiency. This paper proposes a computationally inexpensive machine learning (ML)-based CSI feedback algorithm, which exploits twin channel predictors. The proposed approach can work for both time-division duplex (TDD) and frequency-division duplex (FDD) systems; it reduces feedback overhead and improves the accuracy of the acquired CSI. To observe real benefits, we demonstrate the performance of the proposed approach using the empirical data recorded at the Nokia campus in Stuttgart, Germany. Numerical results show the effectiveness of the proposed approach in terms of reducing overhead, minimizing quantization errors, and increasing spectral efficiency, cosine similarity, and precoding gain compared to the traditional CSI feedback mechanism.
Abstract: Channel state information (CSI) is of pivotal importance as it enables wireless systems to adapt transmission parameters more accurately, thus improving the system's overall performance. However, it becomes challenging to acquire accurate CSI in a highly dynamic environment, mainly due to multi-path fading. Inaccurate CSI can deteriorate the performance, particularly of a massive multiple-input multiple-output (mMIMO) system. This paper adopts machine learning (ML) for CSI prediction. Specifically, we exploit time-series deep learning (DL) models such as the recurrent neural network (RNN) and bidirectional long short-term memory (BiLSTM). Further, we use NeuralProphet (NP), a recently introduced time-series model composed of statistical components, e.g., auto-regression (AR) and Fourier terms, for CSI prediction. Inspired by statistical models, we also develop a novel hybrid framework comprising RNN and NP to achieve better prediction accuracy. The performance of the proposed channel predictors (CPs) is evaluated on a real-time dataset recorded at the Nokia Bell-Labs campus in Stuttgart, Germany. Numerical results show that DL brings performance gains when used with statistical models and showcases robustness.
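As a concrete illustration of one such time-series predictor, a minimal BiLSTM channel predictor in PyTorch is sketched below; the layer sizes, window length, and feature dimension are illustrative assumptions, not the configuration evaluated on the Nokia dataset.

```python
import torch
import torch.nn as nn

class BiLSTMChannelPredictor(nn.Module):
    """Minimal BiLSTM predictor mapping a window of past CSI samples to the next
    one (real and imaginary parts stacked as features); sizes are illustrative."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_features)

    def forward(self, x):             # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next CSI sample

# Example: predict the next sample of a 32-dimensional (real+imag) CSI vector.
model = BiLSTMChannelPredictor(n_features=32)
window = torch.randn(8, 10, 32)      # batch of 8 windows of 10 past samples
next_csi = model(window)             # shape (8, 32)
```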
Abstract: With the deployment of 5G networks, standards organizations have started working on the design phase for sixth-generation (6G) networks. 6G networks will be immensely complex, requiring more deployment time, cost, and management effort. On the other hand, mobile network operators demand these networks to be intelligent, self-organizing, and cost-effective to reduce operating expenses (OPEX). Machine learning (ML), a branch of artificial intelligence (AI), is the answer to many of these challenges, providing pragmatic solutions that can entirely change the future of wireless network technologies. Using some case-study examples, we briefly examine the most compelling problems, particularly at the physical (PHY) and link layers of cellular networks, where ML can bring significant gains. We also review standardization activities in relation to the use of ML in wireless networks and the future timeline for the readiness of standardization bodies to adapt to these changes. Finally, we highlight major issues in the use of ML in wireless technology and provide potential directions to mitigate some of them in 6G wireless networks.
Abstract: In wireless communication, accurate channel state information (CSI) is of pivotal importance. In practice, due to processing and feedback delays, the estimated CSI can be outdated, which can severely deteriorate the performance of the communication system. Besides, to feed back the estimated CSI, a strong compression of the CSI, evaluated at the user equipment (UE), is performed to reduce the over-the-air (OTA) overhead. Such compression strongly reduces the precision of the estimated CSI, which ultimately impacts the performance of multiple-input multiple-output (MIMO) precoding. Motivated by such issues, we present a novel scalable idea of reporting CSI in wireless networks, which is applicable to both time-division duplex (TDD) and frequency-division duplex (FDD) systems. In particular, the novel approach introduces the use of a channel predictor function, e.g., a Kalman filter (KF), at both ends of the communication system to predict CSI. Simulation-based results demonstrate that the novel approach reduces not only the channel mean-squared error (MSE) but also the OTA overhead of feeding back the estimated CSI when there is immense variation in the mobile radio channel. Besides, for an immobile radio channel, feedback can be eliminated, which brings the benefit of further reducing the OTA overhead. Additionally, the proposed method provides a significant signal-to-noise ratio (SNR) gain under both channel conditions, i.e., highly mobile and immobile.
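To make the predictor-at-both-ends idea concrete, a minimal per-coefficient Kalman filter under an assumed first-order autoregressive channel model is sketched below; the AR(1) model and noise levels are illustrative assumptions, not the configuration used in the simulations. Since the prediction step needs no new measurement, both ends can run it between feedback instants, which is what allows feedback to be reduced or, for a static channel, eliminated.

```python
import numpy as np

def kalman_csi_predictor(h_obs, rho=0.98, q=1e-3, r=1e-2):
    """Minimal per-coefficient Kalman filter for CSI prediction under an assumed
    AR(1) channel model h[t+1] = rho*h[t] + w[t] (illustrative model and noise
    levels). h_obs: sequence of noisy CSI observations (complex scalars)."""
    h_hat, p = 0.0 + 0.0j, 1.0
    predictions = []
    for y in h_obs:
        # Prediction step: needs no new measurement, so it can run at both ends.
        h_pred, p_pred = rho * h_hat, rho ** 2 * p + q
        predictions.append(h_pred)
        # Correction step: run when a fed-back or locally estimated observation arrives.
        k = p_pred / (p_pred + r)
        h_hat = h_pred + k * (y - h_pred)
        p = (1.0 - k) * p_pred
    return np.array(predictions)
```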