Abstract:Artificial intelligence (AI) and machine learning (ML) are nowadays mature technologies considered essential for driving the evolution of future communications systems. Simultaneously, Wi-Fi technology has constantly evolved over the past three decades, incorporating new features generation after generation and thus gaining in complexity. As such, researchers have observed that AI/ML functionalities may be required to address upcoming Wi-Fi challenges that would otherwise be difficult to solve with traditional approaches. This paper discusses the role of AI/ML in current and future Wi-Fi networks and outlines the ways forward. A roadmap towards AI/ML-native Wi-Fi, key challenges, standardization efforts, and major enablers are also discussed. An exemplary use case is provided to showcase the potential of AI/ML in Wi-Fi at different adoption stages.
Abstract:Multi-Access Point Coordination (MAPC) is becoming the cornerstone of the IEEE 802.11bn amendment, also known as Wi-Fi 8. Among the MAPC features, Coordinated Spatial Reuse (C-SR) stands out as one of the most appealing due to its capability to orchestrate simultaneous access point transmissions at low implementation complexity. In this paper, we contribute to the understanding of C-SR by introducing an analytical model based on Continuous Time Markov Chains (CTMCs) to characterize its throughput and spatial efficiency. Applying the proposed model to several network topologies, we show that C-SR opportunistically enables parallel high-quality transmissions and yields an average throughput gain of up to 59% compared to the legacy 802.11 Distributed Coordination Function (DCF) and up to 42% compared to the 802.11ax Overlapping Basic Service Set Packet Detect (OBSS/PD) mechanism.
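To make the CTMC idea above concrete, the sketch below builds a four-state chain for two coordinated APs, solves for the stationary distribution, and converts it into per-AP throughput. The topology, rates, and per-state data rates (r_alone, r_shared) are hypothetical placeholders for illustration, not the model or parameters used in the paper.

```python
import numpy as np

# Minimal CTMC sketch for two coordinated APs (all rates/throughputs are hypothetical).
# States: 0 = both idle, 1 = only AP1 transmits, 2 = only AP2 transmits,
#         3 = both transmit in parallel (C-SR-like spatial reuse).
lam = 1.0        # channel-access attempt rate per AP
mu = 2.0         # transmission completion rate
r_alone = 100.0  # Mbps when an AP transmits alone (assumed MCS)
r_shared = 70.0  # Mbps per AP when both transmit (reduced by mutual interference)

# Generator matrix Q: Q[i, j] is the transition rate from state i to state j.
Q = np.array([
    [-2 * lam,  lam,        lam,        0.0],
    [ mu,      -(mu + lam), 0.0,        lam],
    [ mu,       0.0,       -(mu + lam), lam],
    [ 0.0,      mu,         mu,        -2 * mu],
])

# Stationary distribution: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

throughput_ap1 = pi[1] * r_alone + pi[3] * r_shared
throughput_ap2 = pi[2] * r_alone + pi[3] * r_shared
print("stationary distribution:", np.round(pi, 3))
print(f"per-AP throughput (Mbps): {throughput_ap1:.1f}, {throughput_ap2:.1f}")
```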
Abstract:Next-generation Wi-Fi networks look forward to introducing new features like multi-link operation (MLO) to achieve both higher throughput and lower latency. However, given the limited number of available channels, the use of multiple links by a group of contending Basic Service Sets (BSSs) can result in higher interference and channel contention, thus potentially leading to lower performance and reliability. In such a situation, it could be better for all contending BSSs to use fewer links if doing so reduces channel access contention. Recently, reinforcement learning (RL) has proven its potential for optimizing resource allocation in wireless networks. However, the independent operation of each wireless network makes it difficult -- if not almost impossible -- for each individual network to learn a good configuration. To solve this issue, in this paper, we propose the use of a Federated Reinforcement Learning (FRL) framework, i.e., a collaborative machine learning approach to train models across multiple distributed agents without exchanging data, to allow a group of neighboring BSSs to collaboratively learn the best MLO Link Allocation (LA) strategy. The simulation results show that the FRL-based decentralized MLO-LA strategy achieves better throughput fairness, and hence higher reliability, than fixed, random, and RL-based MLO-LA schemes, because it allows the different BSSs to find a link allocation strategy that maximizes the minimum achieved data rate.
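As a rough illustration of the FRL loop described above, the sketch below lets each BSS run a local epsilon-greedy update of a small Q-table over hypothetical link-allocation actions and then averages the tables FedAvg-style, so that only model parameters are exchanged. The reward model, number of BSSs, and action set are assumptions made for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BSS = 4        # number of neighboring BSSs acting as federated agents (assumed)
N_ACTIONS = 3    # hypothetical link-allocation choices: use 1, 2, or 3 links

def local_rl_update(q, steps=50, eps=0.1, alpha=0.1):
    """Epsilon-greedy bandit-style update of a local Q-table on private observations.

    The reward model is a stand-in: adding links increases the raw rate but also
    the contention among the N_BSS overlapping networks.
    """
    for _ in range(steps):
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q))
        links = a + 1
        mean_rate = 30.0 * links / (1.0 + 0.5 * N_BSS * (links - 1))
        reward = rng.normal(loc=mean_rate, scale=3.0)
        q[a] += alpha * (reward - q[a])
    return q

# Federated loop: each BSS trains locally, then only the model parameters
# (Q-tables) are averaged -- no raw observations are exchanged.
global_q = np.zeros(N_ACTIONS)
for _ in range(10):
    local_qs = [local_rl_update(global_q.copy()) for _ in range(N_BSS)]
    global_q = np.mean(local_qs, axis=0)  # FedAvg-style aggregation

print("aggregated Q-values per action:", np.round(global_q, 2))
print("learned allocation: use", int(np.argmax(global_q)) + 1, "link(s)")
```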
Abstract:What will Wi-Fi 8 be? Driven by the strict requirements of emerging applications, next-generation Wi-Fi is set to prioritize Ultra High Reliability (UHR) above all. In this paper, we explore the journey towards IEEE 802.11bn UHR, the amendment that will form the basis of Wi-Fi 8. After providing an overview of the nearly completed Wi-Fi 7 standard, we present new use cases calling for further Wi-Fi evolution. We also outline current standardization, certification, and spectrum allocation activities, sharing updates from the newly formed UHR Study Group. We then introduce the disruptive new features envisioned for Wi-Fi 8 and discuss the associated research challenges. Among those, we focus on access point coordination and demonstrate that it could build upon 802.11be multi-link operation to make Ultra High Reliability a reality in Wi-Fi 8.
Abstract:As wireless standards evolve, more complex functionalities are introduced to address increasing requirements in terms of throughput, latency, security, and efficiency. To unleash the potential of such new features, artificial intelligence (AI) and machine learning (ML) are currently being exploited to derive models and protocols from data, rather than programming them by hand. In this paper, we explore the feasibility of applying ML in next-generation wireless local area networks (WLANs). More specifically, we focus on the IEEE 802.11ax spatial reuse (SR) problem and predict its performance through federated learning (FL) models. The set of FL solutions overviewed in this work is part of the 2021 International Telecommunication Union (ITU) AI for 5G Challenge.
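A minimal sketch of the federated learning workflow mentioned above is given below: each client fits a linear throughput predictor on its own synthetic data with a few full-batch gradient steps, and a server averages the resulting weights. The synthetic features, learning rate, and linear model are placeholders; the actual challenge solutions used richer models and the official SR dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_local_dataset(n=200):
    """Synthetic stand-in for one deployment's SR data: placeholder features
    (e.g., OBSS/PD threshold, neighbor count, signal quality) and a throughput target."""
    X = rng.uniform(0.0, 1.0, size=(n, 3))
    y = 80.0 * X[:, 0] - 30.0 * X[:, 1] + 10.0 * X[:, 2] + rng.normal(0.0, 2.0, n)
    return X, y

def local_gd(w, X, y, lr=0.2, epochs=5):
    """A few full-batch gradient steps of least-squares regression on one client's data."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    for _ in range(epochs):
        grad = 2.0 * Xb.T @ (Xb @ w - y) / len(y)
        w = w - lr * grad
    return w

n_clients = 5
datasets = [make_local_dataset() for _ in range(n_clients)]
w_global = np.zeros(4)

# FedAvg: clients keep their data; only model weights travel to the server.
for _ in range(30):
    w_locals = [local_gd(w_global.copy(), X, y) for X, y in datasets]
    w_global = np.mean(w_locals, axis=0)

X_test, y_test = make_local_dataset(100)
pred = np.hstack([X_test, np.ones((len(X_test), 1))]) @ w_global
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y_test) ** 2))), 2))
```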
Abstract:In this paper, we study the performance of wideband terahertz (THz) communications assisted by an intelligent reflecting surface (IRS). Specifically, we first introduce a generalized channel model that is suitable for electrically large THz IRSs operating in the near-field. Unlike prior works, our channel model takes into account the spherical wavefront of the emitted electromagnetic waves and the spatial-wideband effect. We next show that conventional frequency-flat beamfocusing significantly reduces the power gain due to beam squint, and hence is highly suboptimal. More importantly, we analytically characterize this reduction when the spacing between adjacent reflecting elements is negligible, i.e., holographic reflecting surfaces. Numerical results corroborate our analysis and provide important insights into the design of future IRS-aided THz systems.
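The beam squint effect discussed above can be reproduced numerically: if the reflecting elements are phase-matched to the propagation delays at the carrier frequency only (a frequency-flat design), the coherent gain degrades toward the band edges once the delay spread across the aperture becomes non-negligible. The sketch below uses an assumed 300 GHz carrier, 20 GHz bandwidth, a linear cut of the surface, and a 1 m focal point; it illustrates the phenomenon rather than reproducing the paper's channel model.

```python
import numpy as np

c = 3e8
fc = 300e9           # assumed THz carrier
B = 20e9             # assumed bandwidth
N = 1000             # reflecting elements along one dimension
d = c / fc / 2       # half-wavelength spacing at fc

# Element positions along the surface and a near-field focal point (1 m, broadside).
x = (np.arange(N) - (N - 1) / 2) * d
dist = np.sqrt(x ** 2 + 1.0 ** 2)      # exact spherical-wavefront distances

# Frequency-flat design: element phases matched to the propagation delays at fc only.
phase_fc = 2 * np.pi * fc * dist / c

def normalized_gain(f):
    """Coherent power gain at frequency f with the fc-matched phase profile."""
    resp = np.exp(1j * (2 * np.pi * f * dist / c - phase_fc))
    return np.abs(resp.sum()) ** 2 / N ** 2

for f in fc + np.linspace(-B / 2, B / 2, 5):
    print(f"offset {1e-9 * (f - fc):+5.1f} GHz -> normalized gain {normalized_gain(f):.3f}")
```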
Abstract:Intelligent reflecting surface (IRS)-assisted wireless communication is widely deemed a key technology for 6G systems. The main challenge in deploying an IRS-aided terahertz (THz) link, though, is the severe propagation losses at high frequency bands. Hence, a THz IRS is expected to consist of a massive number of reflecting elements to compensate for those losses. However, as the IRS size grows, the conventional far-field assumption starts becoming invalid and the spherical wavefront of the radiated waves must be taken into account. In this work, we focus on the near-field and analytically determine the IRS response in the Fresnel zone by leveraging electromagnetic theory. Specifically, we derive a novel expression for the path loss and beampattern of a holographic IRS, which is then used to model its discrete counterpart. Our analysis sheds light on the modeling aspects and beamfocusing capabilities of THz IRSs.
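As a simple numerical companion to the near-field analysis above, the sketch below computes, for an assumed geometry, the exact transmitter-element-receiver path lengths of a discrete IRS and sets each element's phase to cancel its own path delay (beamfocusing), comparing the resulting coherent gain against a random phase configuration. Per-element path loss is ignored, and the carrier, array size, and positions are placeholders rather than values from the paper.

```python
import numpy as np

c, f = 3e8, 300e9          # assumed THz carrier
lam = c / f
N = 64                     # N x N discrete reflecting elements (assumed)
d = lam / 2

# Element grid in the y-z plane; transmitter and receiver in the radiating near field.
y = (np.arange(N) - (N - 1) / 2) * d
Y, Z = np.meshgrid(y, y)
tx = np.array([1.0, 0.0, 0.0])     # 1 m in front of the surface (assumed)
rx = np.array([0.8, 0.5, 0.0])     # assumed receiver position

# Exact (spherical-wavefront) path lengths: transmitter -> element -> receiver.
d_tx = np.sqrt(tx[0] ** 2 + (Y - tx[1]) ** 2 + (Z - tx[2]) ** 2)
d_rx = np.sqrt(rx[0] ** 2 + (Y - rx[1]) ** 2 + (Z - rx[2]) ** 2)
total = d_tx + d_rx

# Beamfocusing: each element cancels its own path delay so that all reflected
# contributions add coherently at the receiver (per-element path loss ignored).
focus_phase = 2 * np.pi * total / lam
rand_phase = np.random.default_rng(0).uniform(0, 2 * np.pi, (N, N))

gain_focus = np.abs(np.sum(np.exp(1j * (focus_phase - 2 * np.pi * total / lam)))) ** 2
gain_rand = np.abs(np.sum(np.exp(1j * (rand_phase - 2 * np.pi * total / lam)))) ** 2
print(f"beamfocusing gain = {gain_focus:.0f} (= N^4 for N^2 elements)")
print(f"random-phase gain = {gain_rand:.0f}")
```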
Abstract:With the advent of Artificial Intelligence (AI)-empowered communications, industry, academia, and standardization organizations are progressing on the definition of mechanisms and procedures to address the increasing complexity of future 5G and beyond communications. In this context, the International Telecommunication Union (ITU) organized the first AI for 5G Challenge to bring industry and academia together to introduce and solve representative problems related to the application of Machine Learning (ML) to networks. In this paper, we present the results gathered from Problem Statement~13 (PS-013), organized by Universitat Pompeu Fabra (UPF), whose primary goal was to predict the performance of next-generation Wireless Local Area Networks (WLANs) applying Channel Bonding (CB) techniques. In particular, we overview the ML models proposed by participants (including Artificial Neural Networks, Graph Neural Networks, Random Forest regression, and gradient boosting) and analyze their performance on an open dataset generated using the IEEE 802.11ax-oriented Komondor network simulator. The accuracy achieved by the proposed methods demonstrates the suitability of ML for predicting the performance of WLANs. Moreover, we discuss the importance of abstracting WLAN interactions to achieve better results, and we argue that there is certainly room for improvement in throughput prediction through ML.
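For readers unfamiliar with this type of pipeline, the sketch below shows the general shape of such a throughput-prediction experiment with a Random Forest regressor. The dataset here is synthetic and its three features are placeholders; the actual challenge used the open Komondor-generated dataset with its own feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the Komondor-generated dataset; the three columns are
# placeholder features (channel width, OBSS neighbors, signal strength).
n = 2000
X = np.column_stack([
    rng.integers(1, 5, n),          # bonded channel width (x20 MHz)
    rng.integers(0, 10, n),         # number of overlapping BSSs
    rng.uniform(-85.0, -40.0, n),   # mean RSSI (dBm)
])
y = 30.0 * X[:, 0] - 8.0 * X[:, 1] + 0.5 * (X[:, 2] + 85.0) + rng.normal(0.0, 5.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (Mbps):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```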
Abstract:An intelligent reflecting surface (IRS) at terahertz (THz) bands is expected to have a massive number of reflecting elements to compensate for the severe propagation losses. However, as the IRS size grows, the conventional far-field assumption starts becoming invalid and the spherical wavefront of the radiated waves should be taken into account. In this work, we consider a spherical wave channel model and pursue a comprehensive study of IRS-aided multiple-input multiple-output (MIMO) in terms of power gain and energy efficiency (EE). Specifically, we first analyze the power gain under beamfocusing and beamforming, and show that the latter is suboptimal even at distances of several meters from the IRS. To quantify this effect, we derive an approximate, yet accurate, closed-form expression for the loss in the power gain under beamforming. Building on the derived model, we next show that an IRS can significantly improve the EE of MIMO when it operates in the radiating near-field and performs beamfocusing. Numerical results corroborate our analysis and provide novel insights into the design and performance of IRS-assisted THz communication.
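The gap between beamfocusing and far-field beamforming noted above can be illustrated with a short numerical experiment: for a linear array whose phases are either matched to the exact spherical distances (beamfocusing) or to the plane-wave approximation (beamforming), the fraction of the optimal gain retained by beamforming shrinks as the receiver moves closer to the surface. The array size, carrier, and distances below are assumptions chosen so that the Fraunhofer distance is several tens of meters.

```python
import numpy as np

c, f = 3e8, 300e9          # assumed THz carrier
lam = c / f
N = 400                    # elements of a linear array cut (assumed)
d = lam / 2
x = (np.arange(N) - (N - 1) / 2) * d    # element positions

def normalized_gain(design_phase, r):
    """Power gain at a broadside point at distance r, normalized to the optimum."""
    dist = np.sqrt(x ** 2 + r ** 2)                 # exact spherical distances
    channel_phase = 2 * np.pi * dist / lam
    return np.abs(np.sum(np.exp(1j * (design_phase - channel_phase)))) ** 2 / N ** 2

for r in [0.5, 1.0, 2.0, 5.0, 20.0]:                # receiver distances in meters
    focus_phase = 2 * np.pi * np.sqrt(x ** 2 + r ** 2) / lam   # spherical-wavefront matched
    beamform_phase = np.zeros(N)                    # plane-wave (far-field) design, broadside
    ratio = normalized_gain(beamform_phase, r) / normalized_gain(focus_phase, r)
    print(f"r = {r:5.1f} m: beamforming retains {100 * ratio:5.1f}% of the beamfocusing gain")
```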
Abstract:Terahertz (THz) communication is widely considered as a key enabler for future 6G wireless systems. However, THz links are subject to high propagation losses and inter-symbol interference due to the frequency selectivity of the channel. Massive multiple-input multiple-output (MIMO) along with orthogonal frequency division multiplexing (OFDM) can be used to deal with these problems. Nevertheless, when the propagation delay across the base station (BS) antenna array exceeds the symbol period, the spatial response of the BS array varies across the OFDM subcarriers. This phenomenon, known as beam squint, renders narrowband combining approaches ineffective. Additionally, channel estimation becomes challenging in the absence of combining gain during the training stage. In this work, we address the channel estimation and hybrid combining problems in wideband THz massive MIMO with uniform planar arrays. Specifically, we first introduce a low-complexity beam squint mitigation scheme based on true-time-delay. Next, we propose a novel variant of the popular orthogonal matching pursuit (OMP) algorithm to accurately estimate the channel with low training overhead. Our channel estimation and hybrid combining schemes are analyzed both theoretically and numerically. Moreover, the proposed schemes are extended to the multi-antenna user case. Simulation results are provided showcasing the performance gains offered by our design compared to standard narrowband combining and OMP-based channel estimation.
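To give a flavor of the sparse-recovery machinery underlying the proposed estimator, the sketch below implements textbook orthogonal matching pursuit on a toy on-grid sparse channel with a random sensing matrix. It omits the wideband, beam squint, and true-time-delay aspects that the paper's variant addresses, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, sparsity):
    """Textbook orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit the support by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_ls
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_ls
    return x

# Toy sparse channel: a few path gains on an angular grid (illustrative sizes).
M, G, L = 64, 256, 3                       # pilot observations, grid size, paths
A = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, G))) / np.sqrt(M)   # sensing matrix
x_true = np.zeros(G, dtype=complex)
x_true[rng.choice(G, L, replace=False)] = rng.normal(size=L) + 1j * rng.normal(size=L)
y = A @ x_true + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

x_hat = omp(A, y, sparsity=L)
nmse = np.linalg.norm(x_hat - x_true) ** 2 / np.linalg.norm(x_true) ** 2
print("NMSE:", float(nmse))
```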