Centre for Wireless Communications
Abstract:We characterize three near-field sub-regions for phased array antennas by elaborating on their boundaries: the {\it Fraunhofer}, {\it radial-focal}, and {\it non-radiating} distances. The {\it Fraunhofer distance}, which is the boundary between the near and far fields, has been well studied in the literature for the principal axis (PA) of single-element center-fed antennas, where the PA denotes the axis perpendicular to the antenna surface passing through the antenna center. The results are also valid for phased arrays if the PA coincides with the boresight, which is not commonly the case in practice. In this work, we completely characterize the Fraunhofer distance by considering various angles between the PA and the boresight. For the {\it radial-focal distance}, below which beamfocusing is feasible in the radial domain, a formal characterization of the corresponding region based on the general model of near-field channels (GNC) is missing in the literature. We investigate this and show that maximum-ratio-transmission (MRT) beamforming based on the simple uniform spherical wave (USW) channel model results in a radial gap between the achieved and the desired focal points. While the gap vanishes when the array size $N$ becomes sufficiently large, we propose a practical algorithm to remove this gap in the non-asymptotic case where $N$ is not very large. Finally, the {\it non-radiating distance}, below which the reactive power dominates the active power, has been studied in the literature for single-element antennas. We analytically explore it for phased arrays and show how the different excitation phases of the antenna array affect it. We also clarify some misconceptions about the non-radiating and Fresnel distances prevailing in the literature.
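For reference, the single-element result that this abstract generalizes is the classical far-field boundary; for an antenna (or array aperture) of largest dimension $D$ operating at wavelength $\lambda$, it is commonly written as

\begin{equation}
  d_{\mathrm{F}} = \frac{2D^{2}}{\lambda},
\end{equation}

i.e., the distance beyond which the maximum phase deviation of an incident spherical wavefront across the aperture stays below $\pi/8$.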
Abstract:Efficient Random Access (RA) is critical for enabling reliable communication in Industrial Internet of Things (IIoT) networks. Herein, we propose a deep reinforcement learning-based distributed RA scheme, entitled Neural Network-Based Bandit (NNBB), for the IIoT alarm scenario. In such a scenario, the devices may detect a common critical event, and the goal is to ensure that the alarm information is delivered successfully from at least one device. The proposed NNBB scheme is implemented at each device, where it trains itself online and establishes implicit inter-device coordination to achieve the common goal. Devices can transmit simultaneously on multiple orthogonal channels, and each possible transmission pattern constitutes a possible action for the NNBB, which uses a deep neural network to determine the action. Our simulation results show that as the number of devices in the network increases, so does the performance gain of the NNBB compared to the Multi-Armed Bandit (MAB) RA benchmark. For instance, NNBB experiences only a 7% drop in success rate when there are four channels and the number of devices increases from 10 to 60, while MAB faces a 25% drop.
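As a rough illustration of the kind of agent described above (not the authors' implementation; the dimensions, names, and reward definition are chosen here purely for exposition), the sketch below scores every transmission pattern over K orthogonal channels with a small neural network and updates it online from a binary success reward:

```python
# Illustrative neural-bandit sketch: each action is a transmission pattern over
# K orthogonal channels; a tiny MLP scores all patterns and is updated online.
import itertools
import torch
import torch.nn as nn

K = 4                                                   # number of orthogonal channels (assumed)
ACTIONS = list(itertools.product([0, 1], repeat=K))     # all 2^K transmission patterns

class NeuralBandit(nn.Module):
    """Maps a local observation to one score per transmission pattern."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(),
            nn.Linear(32, len(ACTIONS)),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def act(model, obs, eps=0.1):
    """Epsilon-greedy choice of a transmission pattern."""
    if torch.rand(1).item() < eps:
        return torch.randint(len(ACTIONS), (1,)).item()
    with torch.no_grad():
        return int(model(obs).argmax())

def update(model, optimizer, obs, action, reward):
    """One online regression step of the chosen pattern's score toward the reward."""
    loss = (model(obs)[action] - reward) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model = NeuralBandit(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.zeros(8)                                    # placeholder local observation
a = act(model, obs)
update(model, opt, obs, a, reward=1.0)                  # 1.0 if the alarm got through
```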
Abstract:Radio frequency (RF) wireless power transfer (WPT) is a key technology for future low-power wireless systems. However, the inherently low end-to-end power transfer efficiency (PTE) is challenging for practical applications. The main factors contributing to it are the channel losses, the transceivers' power consumption, and losses related to, e.g., the digital-to-analog converter (DAC), the high-power amplifier, and the rectenna. Optimizing the PTE requires careful consideration of these factors, which motivates the current work. Herein, we consider an analog multi-antenna power transmitter that aims to charge a single energy harvester. We first provide a mathematical framework to calculate the harvested power from multi-tone signal transmissions and the system power consumption. Then, we formulate the joint waveform and analog beamforming design problem to minimize the power consumption while meeting the charging requirements. Finally, we propose an optimization approach relying on swarm intelligence to solve the specified problem. Simulation results quantify the reduction in power consumption as the DAC resolution, phase-shifter resolution, and antenna length increase, while increasing the system frequency is seen to result in higher power consumption.
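The swarm-intelligence step could, for instance, follow a generic particle swarm optimization loop of the kind sketched below; the objective, penalty weight, and toy harvested-power model are placeholders chosen here for illustration, not the paper's formulation:

```python
# Generic particle swarm optimization (PSO) sketch for a constrained design problem:
# minimize a power-consumption surrogate plus a penalty when a (toy) harvested-power
# model falls short of the charging requirement.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0), seed=0):
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    p_best = x.copy()                                       # per-particle best positions
    p_val = np.array([objective(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()                  # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

harvested = lambda z: np.sum(np.cos(z)) ** 2                # toy harvested-power model
cost = lambda z: np.sum(z ** 2) + 10.0 * max(0.0, 1.0 - harvested(z))
best_design, best_cost = pso(cost, dim=8)
```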
Abstract:In this letter, we study an attack that leverages a reconfigurable intelligent surface (RIS) to induce harmful interference toward multiple users in massive multiple-input multiple-output (mMIMO) systems during the data transmission phase. We propose an efficient and flexible weighted-sum projected gradient-based algorithm for the attacker to optimize the RIS reflection coefficients without knowing the legitimate users' channels. To counter such a threat, we propose two reception strategies. Simulation results demonstrate that our malicious algorithm outperforms baseline strategies while offering adaptability for targeting specific users. At the same time, our results show that our mitigation strategies are effective even if only an imperfect estimate of the cascaded RIS channel is available.
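A projected-gradient step of the general kind mentioned above might look as follows; the weighted-sum interference objective, channel matrix, and all dimensions are stand-ins invented for this sketch, not the letter's actual model:

```python
# Projected gradient ascent over RIS reflection coefficients with the usual
# unit-modulus constraint; the objective below is an illustrative quadratic form.
import numpy as np

rng = np.random.default_rng(1)
N_ris, n_users = 64, 4
G = rng.standard_normal((n_users, N_ris)) + 1j * rng.standard_normal((n_users, N_ris))
w = np.ones(n_users) / n_users                        # per-user weights in the weighted sum

def objective(phi):
    """Weighted sum of interference powers induced toward the users (placeholder)."""
    return float(np.sum(w * np.abs(G @ phi) ** 2))

def grad(phi):
    """Wirtinger gradient of sum_k w_k |g_k^T phi|^2 with respect to conj(phi)."""
    return (G.conj().T * w) @ (G @ phi)

phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N_ris))   # start on the unit circle
step = 1e-2
for _ in range(200):
    phi = phi + step * grad(phi)                      # ascend the interference objective
    phi = phi / np.abs(phi)                           # project back onto unit modulus
```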
Abstract:This paper delves into the unexplored frequency-dependent characteristics of beyond diagonal reconfigurable intelligent surfaces (BD-RISs). A generalized practical frequency-dependent reflection model is proposed as a fundamental framework for configuring fully-connected and group-connected RISs in a multi-band multi-base station (BS) multiple-input multiple-output (MIMO) network. Leveraging this practical model, multi-objective optimization strategies are formulated to maximize the received power at multiple users connected to different BSs, each operating under a distinct carrier frequency. By relying on matrix theory and exploiting the symmetric structure of the reflection matrices inherent to BD-RISs, closed-form relaxed solutions for the challenging optimization problems are derived. The ideal solutions are then combined with codebook-based approaches to configure the practical capacitance values for the BD-RISs. Simulation results reveal the frequency-dependent behaviors of different RIS architectures and demonstrate the effectiveness of the proposed schemes. Notably, BD-RISs exhibit superior resilience to frequency deviations compared to conventional single-connected RISs. Moreover, the proposed optimization approaches prove effective in enabling the targeted operation of BD-RISs across one or more carrier frequencies. The results also shed light on the potential for harmful interference in the absence of proper synchronization between RISs and adjacent BSs.
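For readers unfamiliar with BD-RIS reflection matrices, the sketch below builds one from the lossless, reciprocal multi-port model $\Theta = (jX + Z_0 I)^{-1}(jX - Z_0 I)$ commonly used for fully-connected architectures and checks its symmetry and unitarity; the simple capacitive frequency dependence assigned to each reactance entry, and all numerical values, are placeholders for illustration rather than the paper's reflection model:

```python
# Fully-connected BD-RIS reflection matrix from a real symmetric reactance matrix,
# with a simplified capacitive frequency dependence used purely for illustration.
import numpy as np

Z0, N = 50.0, 4                                     # reference impedance, RIS ports
rng = np.random.default_rng(2)
C = rng.uniform(0.5e-12, 5e-12, (N, N))             # tunable capacitances (toy values)
C = (C + C.T) / 2                                   # reciprocal (symmetric) network

def reflection_matrix(C, f):
    """Theta(f) = (jX + Z0 I)^{-1} (jX - Z0 I) with X(f) a capacitive reactance."""
    X = -1.0 / (2 * np.pi * f * C)                  # reactance entries at frequency f
    I = np.eye(C.shape[0])
    return np.linalg.solve(1j * X + Z0 * I, 1j * X - Z0 * I)

Theta = reflection_matrix(C, f=3.5e9)
print(np.allclose(Theta, Theta.T))                        # symmetric: reciprocal network
print(np.allclose(Theta.conj().T @ Theta, np.eye(N)))     # unitary: lossless network
```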
Abstract:Multi-access Edge Computing (MEC) can be implemented together with Open Radio Access Network (O-RAN) over commodity platforms to offer low-cost deployment and bring the services closer to end-users. In this paper, a joint O-RAN/MEC orchestration using a Bayesian deep reinforcement learning (RL)-based framework is proposed that jointly controls the O-RAN functional splits, the allocated resources and hosting locations of the O-RAN/MEC services across geo-distributed platforms, and the routing for each O-RAN/MEC data flow. The goal is to minimize the long-term overall network operation cost and maximize the MEC performance criterion while adapting to possibly time-varying O-RAN/MEC demands and resource availability. This orchestration problem is formulated as a Markov decision process (MDP). However, the system consists of multiple BSs that share the same resources and serve heterogeneous demands, and their parameters have non-trivial relations. Consequently, finding the exact model of the underlying system is impractical, and the formulated MDP results in a large state space with a multi-dimensional discrete action space. To address these modeling and dimensionality issues, a novel model-free RL agent is proposed for our solution framework. The agent is built from a Double Deep Q-network (DDQN) that tackles the large state space and is then combined with action branching, an action decomposition method that effectively handles the multi-dimensional discrete action space with only a linear increase in complexity. Further, an efficient exploration-exploitation strategy under a Bayesian framework using Thompson sampling is proposed to improve the learning performance and expedite convergence. Trace-driven simulations are performed using an O-RAN-compliant model. The results show that our approach is data-efficient (i.e., converges faster) and increases the returned reward by 32\% compared to its non-Bayesian version.
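The action-branching idea can be pictured as a shared trunk with one Q-value head per action dimension, as in the toy network below; the state dimension, branch sizes, and layer widths are arbitrary choices for this sketch and do not reflect the paper's architecture:

```python
# Toy action-branching Q-network: a shared trunk feeds one head per action
# dimension, so the joint discrete action space grows linearly, not combinatorially.
import torch
import torch.nn as nn

class BranchingQNetwork(nn.Module):
    def __init__(self, state_dim, branch_sizes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        # one head per action dimension (e.g., split choice, resources, routing)
        self.heads = nn.ModuleList([nn.Linear(128, n) for n in branch_sizes])

    def forward(self, state):
        z = self.trunk(state)
        return [head(z) for head in self.heads]          # one Q-vector per branch

net = BranchingQNetwork(state_dim=32, branch_sizes=[3, 5, 4])
q_per_branch = net(torch.randn(1, 32))
action = [int(q.argmax(dim=-1)) for q in q_per_branch]   # one discrete choice per branch
```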
Abstract:Spatially correlated device activation is a typical feature of the Internet of Things (IoT). This motivates the development of channel scheduling (CS) methods that mitigate device collisions efficiently in such scenarios, which constitutes the scope of this work. Specifically, we present a quadratic program (QP) formulation for the CS problem that considers the joint activation probabilities among devices. This formulation allows the devices to stochastically select the transmit channels, thus leading to a soft-clustering approach. We prove that the optimal QP solution can only be attained when the problem is transformed into a hard-clustering one, leading to a pure integer QP, which we then transform into a pure integer linear program (PILP). We leverage the branch-and-cut (B&C) algorithm to solve the PILP optimally. Due to the high computational cost of B&C, we resort to some sub-optimal clustering methods with low computational cost to tackle the clustering problem in CS. Our findings demonstrate that the CS strategy sourced from B&C significantly outperforms those derived from sub-optimal clustering methods, even amidst increased device correlation.
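The integer-QP-to-PILP step relies on the standard linearization of binary products $x_i x_j$ via auxiliary variables; the toy example below applies it to a tiny device-to-channel assignment and solves the resulting PILP with SciPy's MILP solver (the data, sizes, and the single-channel constraint are invented for illustration and are not the paper's formulation):

```python
# Linearizing a binary quadratic objective (sum of q_ij * x_i * x_j) into a pure
# integer linear program via auxiliary variables y_ij >= x_i + x_j - 1, y_ij >= 0.
import itertools
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

n = 4                                               # devices (toy example)
pairs = list(itertools.combinations(range(n), 2))
rng = np.random.default_rng(3)
q = {p: rng.uniform(0.0, 1.0) for p in pairs}       # joint activation probabilities (toy)

n_var = n + len(pairs)                              # x_0..x_{n-1}, then one y per pair
c = np.zeros(n_var)
for k, p in enumerate(pairs):
    c[n + k] = q[p]                                 # minimize summed pairwise correlation

rows, lb, ub = [], [], []
for k, (i, j) in enumerate(pairs):                  # enforce y_ij >= x_i + x_j - 1
    row = np.zeros(n_var)
    row[n + k], row[i], row[j] = 1.0, -1.0, -1.0
    rows.append(row); lb.append(-1.0); ub.append(np.inf)
row = np.zeros(n_var); row[:n] = 1.0                # assign exactly 2 devices to this channel
rows.append(row); lb.append(2.0); ub.append(2.0)

integrality = np.r_[np.ones(n), np.zeros(len(pairs))]     # x binary, y continuous
res = milp(c, constraints=LinearConstraint(np.array(rows), lb, ub),
           integrality=integrality, bounds=Bounds(0, 1))
print("devices picked for this channel:", np.flatnonzero(res.x[:n] > 0.5))
```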
Abstract:Diffusion models are at the vanguard of generative AI research, with renowned solutions such as ImageGen by Google Brain and DALL·E 3 by OpenAI. Nevertheless, the potential merits of diffusion models for communication engineering applications are not fully understood yet. In this paper, we aim to unleash the power of generative AI for the PHY design of constellation symbols in communication systems. Although the geometry of constellations is predetermined by networking standards, e.g., quadrature amplitude modulation (QAM), probabilistic shaping can design the probability of occurrence (generation) of the constellation symbols. This can help improve the information rate and decoding performance of communication systems. We exploit the ``denoise-and-generate'' characteristics of denoising diffusion probabilistic models (DDPM) for probabilistic constellation shaping. The key idea is to learn to generate constellation symbols out of noise, ``mimicking'' the way the receiver performs symbol reconstruction. This way, the constellation symbols sent by the transmitter and those inferred (reconstructed) at the receiver become as similar as possible, resulting in as few mismatches as possible. Our results show that the generative AI-based scheme outperforms the deep neural network (DNN)-based benchmark and uniform shaping, while providing network resilience as well as robust out-of-distribution performance under low-SNR regimes and non-Gaussian assumptions. Numerical evaluations highlight a 30% improvement in terms of cosine similarity and a threefold improvement in terms of mutual information compared to the DNN-based approach for the 64-QAM geometry.
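To make the ``denoise-and-generate'' idea concrete, the sketch below implements only the standard DDPM forward (noising) process on a toy 64-QAM grid, producing the (noisy symbol, noise) training pairs a denoiser would learn from; the schedule, step count, and normalization are generic choices, not the paper's hyperparameters:

```python
# Standard DDPM forward process q(x_t | x_0) applied to toy 64-QAM symbols:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)                  # generic linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

levels = np.arange(-7, 8, 2)                        # 8 I levels x 8 Q levels = 64-QAM
qam64 = np.array([(i, q) for i in levels for q in levels], dtype=float)
qam64 /= np.sqrt((qam64 ** 2).sum(axis=1).mean())   # normalize to unit average power

def noisy_symbols(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0); (x_t, eps) is one training pair for the denoiser."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

rng = np.random.default_rng(4)
x_t, eps = noisy_symbols(qam64, t=50, rng=rng)      # the network learns to predict eps
```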
Abstract:Thanks to the outstanding achievements of state-of-the-art generative models like ChatGPT and diffusion models, generative AI has gained substantial attention across various industrial and academic domains. In this paper, denoising diffusion probabilistic models (DDPMs) are proposed for a practical finite-precision wireless communication system with hardware-impaired transceivers. The intuition behind DDPMs is to decompose the data generation process over so-called "denoising" steps. Inspired by this, a DDPM-based receiver is proposed for a practical wireless communication scheme that faces realistic non-idealities, including hardware impairments (HWI), channel distortions, and quantization errors. It is shown that our approach provides network resilience under low-SNR regimes, near-invariant reconstruction performance with respect to different HWI levels and quantization errors, and robust out-of-distribution performance against non-Gaussian noise. Moreover, the reconstruction performance of our scheme is evaluated in terms of cosine similarity and mean-squared error (MSE), highlighting a more than 25 dB improvement compared to conventional deep neural network (DNN)-based receivers.
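Complementing the forward-process sketch above, the reverse ("denoising") update that a DDPM-based receiver would iterate is the standard ancestral sampling step shown below; the zero-output eps_model stands in for the trained noise predictor, and every numerical choice is illustrative:

```python
# Standard DDPM reverse step x_t -> x_{t-1}; a trained noise-prediction network
# would replace the placeholder eps_model below.
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(x_t, t):
    return np.zeros_like(x_t)                       # placeholder for the trained denoiser

def reverse_step(x_t, t, rng):
    """One ancestral sampling step of the standard DDPM sampler (sigma_t^2 = beta_t)."""
    eps = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(5)
x = rng.standard_normal((64, 2))                    # start from noise (e.g., received samples)
for t in reversed(range(T)):
    x = reverse_step(x, t, rng)                     # iteratively denoise toward symbols
```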
Abstract:The fifth generation (5G) of mobile communication, supported by millimetre-wave (mmWave) technology and higher base station (BS) densification, facilitates enhanced user equipment (UE) positioning. Therefore, the 5G cellular system is designed with many positioning measurements and special positioning reference signals with a multitude of configurations for a variety of use cases, targeting stringent positioning accuracies. One of the major factors on which the accuracy of a particular position estimate depends is the geometry of the nodes in the system, which can be measured with the geometric dilution of precision (GDOP). Hence, in this paper, we investigate improving the accuracy of time difference of arrival (TDOA)-based UE positioning by exploiting the geometric distribution of BSs in a mixed LOS and NLOS environment. We propose a BS selection algorithm for UE positioning based on the GDOP of the BSs participating in the positioning process. Simulations are conducted for indoor and outdoor scenarios that use antenna arrays with beam-based mmWave NR communication. The results demonstrate that the proposed BS selection achieves higher positioning accuracy with fewer radio resources than other BS selection methods.
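As a self-contained illustration of the GDOP criterion driving such a selection (with toy 2-D positions and a brute-force subset search that the paper's algorithm presumably replaces with something smarter), one can compute the TDOA GDOP and pick the best BS subset as follows:

```python
# TDOA GDOP, sqrt(trace((H^T H)^{-1})), with Jacobian rows formed against a reference
# BS, plus a brute-force selection of the 4-BS subset with the smallest GDOP.
import itertools
import numpy as np

def tdoa_gdop(ue, bs_list):
    """GDOP of TDOA positioning; bs_list[0] is taken as the reference BS."""
    u = np.array([(bs - ue) / np.linalg.norm(bs - ue) for bs in bs_list])
    H = u[1:] - u[0]                                  # TDOA geometry (Jacobian) matrix
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

rng = np.random.default_rng(6)
ue = np.zeros(2)                                      # UE at the origin (toy scenario)
bss = rng.uniform(-200.0, 200.0, size=(8, 2))         # candidate BS positions (toy)

best = min(itertools.combinations(range(len(bss)), 4),
           key=lambda idx: tdoa_gdop(ue, bss[list(idx)]))
print("selected BS indices:", best)
```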