Abstract: High-frequency, wide-bandwidth cellular communications over mmW and sub-THz bands offer the opportunity for high data rates; however, they also suffer from high path loss, resulting in limited coverage. To mitigate the coverage limitations, high-gain beamforming is essential. Implementing beamforming involves a large number of antennas, which introduces the analog beam constraint, i.e., only one frequency-flat beam is generated per transceiver chain (TRx). The recently introduced joint phase-time array (JPTA) architecture, which utilizes both true-time-delay (TTD) units and phase shifters (PSs), alleviates the analog beam constraint by creating multiple frequency-dependent beams per TRx, enabling users in different directions to be scheduled in a frequency-division manner. One class of previous studies offered solutions with "rainbow" beams, which tend to allocate only a small bandwidth per beam direction. Another class focused on the uniform linear array (ULA) antenna architecture, whose frequency-dependent beams were designed along a single axis, in either the azimuth or the elevation direction. In this paper, we present a novel 3D beamforming codebook design that maximizes beamforming gain while steering radiation toward desired azimuth and elevation directions, across sub-bands partitioned according to the scheduled users' bandwidth requirements. We provide both analytical solutions and iterative algorithms to design the PSs and TTD units for a desired sub-band beam pattern. Through simulations of the beamforming gain, we observe that our proposed solutions outperform state-of-the-art solutions reported elsewhere.
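To make the sub-band steering idea concrete, the following is a minimal numerical sketch (not the paper's codebook design; the array size, carrier, sub-band frequencies, and angles are all assumed values). For a ULA, one TTD value and one PS value per antenna give a per-antenna phase that is linear in frequency, so the pair can be solved from two equations so that a single analog front end forms a full-gain beam in a different direction in each of two sub-bands:

```python
import numpy as np

c = 3e8
N = 16                          # ULA elements (assumed)
fc = 28e9                       # carrier (assumed)
d = c / (2 * fc)                # half-wavelength spacing at the carrier
f1, f2 = 27.9e9, 28.1e9         # two sub-band center frequencies (assumed)
th1, th2 = np.deg2rad(-30), np.deg2rad(40)  # desired directions (assumed)

n = np.arange(N)
# Per-antenna phase required to steer angle th at frequency f:
phase = lambda f, th: -2 * np.pi * f * n * d * np.sin(th) / c
# A TTD unit contributes -2*pi*f*tau_n (linear in f), a PS a constant psi_n.
# Solving the 2x2 linear system per antenna:
#   -2*pi*f1*tau_n + psi_n = phase(f1, th1)
#   -2*pi*f2*tau_n + psi_n = phase(f2, th2)
tau = (phase(f2, th2) - phase(f1, th1)) / (-2 * np.pi * (f2 - f1))
psi = phase(f1, th1) + 2 * np.pi * f1 * tau

def gain(f, th):
    """Normalized beamforming gain (in [0, 1]) at frequency f, angle th."""
    w = np.exp(1j * (-2 * np.pi * f * tau + psi))      # analog weights at f
    a = np.exp(-1j * 2 * np.pi * f * n * d * np.sin(th) / c)  # array response
    return abs(w.conj() @ a) / N

print(gain(f1, th1), gain(f2, th2))  # full gain in each sub-band's direction
```

At each sub-band's design point the per-antenna phases cancel exactly, so the gain reaches its maximum of 1, while the gain toward the other sub-band's direction stays low.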
Abstract: Hybrid beamforming is an attractive solution for building cost-effective and energy-efficient transceivers for millimeter-wave and terahertz systems. However, conventional hybrid beamforming techniques rely on analog components with a frequency-flat response, such as phase shifters and switches, which limits the flexibility of the achievable beam patterns. As a novel alternative, this paper proposes a new class of hybrid beamforming called joint phase-time arrays (JPTA), which additionally uses true-time-delay elements in the analog beamforming network to create frequency-dependent analog beams. Using two important frequency-dependent beam behaviors as examples, the numerous benefits of such flexibility are illustrated. Subsequently, the JPTA beamformer design problem of generating any desired beam behavior is formulated, and near-optimal algorithms for the problem are proposed. Simulations show that the proposed algorithms outperform heuristic solutions for the JPTA beamformer update. Furthermore, it is shown that JPTA can achieve the two exemplified beam behaviors with a single radio-frequency chain, whereas conventional hybrid beamforming requires the number of radio-frequency chains to scale with the number of antennas to achieve similar performance. Finally, a wide range of problems for further tapping into the potential of JPTA is listed as future directions.
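One frequency-dependent beam behavior that delay elements enable can be sketched in a few lines (a toy example, not the paper's design algorithms; all parameters are assumed). With a uniform delay step and a uniform phase step across a ULA, the beam direction scans continuously with frequency, and the two steps can be chosen so the scan covers a desired angular range over the band:

```python
import numpy as np

c, N, fc = 3e8, 16, 28e9        # assumed array size and carrier
d = c / (2 * fc)                # half-wavelength spacing at the carrier
n = np.arange(N)

# Choose a per-antenna delay step dt and phase step ps so the beam points at
# -20 deg at the lower band edge f_lo and +20 deg at the upper edge f_hi.
f_lo, f_hi = 27.6e9, 28.4e9     # assumed band edges
s_lo, s_hi = np.sin(np.deg2rad(-20)), np.sin(np.deg2rad(20))
# Per-antenna phase n*ps - 2*pi*f*n*dt must equal -2*pi*f*n*d*sin(theta)/c,
# i.e. sin(theta(f)) = dt*c/d - ps*c/(2*pi*f*d); solving at f_lo and f_hi:
dt = (f_hi * s_hi - f_lo * s_lo) * d / (c * (f_hi - f_lo))
ps = 2 * np.pi * f_lo * (dt - s_lo * d / c)

def peak_angle(f):
    """Direction (deg) of the beam peak at frequency f, via a fine angle grid."""
    th = np.deg2rad(np.linspace(-90, 90, 3601))
    w = np.exp(1j * (n * ps - 2 * np.pi * f * n * dt))
    a = np.exp(-1j * 2 * np.pi * f * np.outer(np.sin(th), n) * d / c)
    return np.rad2deg(th[np.argmax(abs(a @ w.conj()))])

print([round(peak_angle(f)) for f in (27.6e9, 28.0e9, 28.4e9)])
```

The printed peak directions move from one band edge to the other, which is the kind of flexibility a frequency-flat phase-shifter network alone cannot provide.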
Abstract: Due to its ubiquitous and contact-free nature, WiFi infrastructure has tremendous potential for performing sensing tasks. However, the channel state information (CSI) measured by a WiFi receiver suffers from errors in both its gain and its phase, which can significantly hinder sensing tasks. By analyzing these errors across different WiFi receivers, this work develops a mathematical model of the gain and phase errors. Based on these models, several theoretically justified preprocessing algorithms for correcting such errors at a receiver, and thus obtaining clean CSI, are presented. Simulation results show that at typical system parameters, the developed algorithms reduce noise by $40$% and $200$% compared to baseline methods for gain correction and phase correction, respectively, without significantly increasing computational cost. The superiority of the proposed methods is also validated in a real-world testbed for respiration-rate monitoring (an exemplary sensing task), where they improve the estimation signal-to-noise ratio by $20$% compared to baseline methods.
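For intuition about CSI phase cleaning, here is a hedged sketch of a standard linear-fit phase sanitization (a common baseline technique, not necessarily the algorithms developed in this work; all parameter values are invented). Receiver imperfections such as sampling-time and carrier-frequency offsets add a line (slope plus constant) to the phase across subcarriers, which a least-squares fit can remove:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 64                      # number of subcarriers (assumed)
k = np.arange(K)

# Synthetic CSI phase: a smooth multipath profile plus a receiver-induced
# linear error (slope from sampling-time offset, constant offset) and noise.
true_phase = 0.3 * np.sin(2 * np.pi * k / K)
slope, offset = 0.05, 1.2   # assumed error parameters
measured = true_phase + slope * k + offset + 0.01 * rng.standard_normal(K)

# Least-squares fit of a line across subcarriers, then subtract it.
# (Real measured CSI phase must be unwrapped with np.unwrap before fitting.)
A = np.vstack([k, np.ones(K)]).T
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
clean = measured - A @ coef
```

The residual `clean` has no remaining linear trend across subcarriers, so frequency-selective structure from the channel (the part useful for sensing) is preserved while the receiver-induced line is removed.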
Abstract: In this paper, we investigate learning-based MIMO-OFDM symbol detection strategies focusing on a special recurrent neural network (RNN) -- reservoir computing (RC). We first introduce the time-frequency RC to take advantage of the structural information inherent in OFDM signals. Using the time-domain RC and the time-frequency RC as building blocks, we provide two extensions of the shallow RC to RCNet: 1) stacking multiple time-domain RCs; 2) stacking multiple time-frequency RCs into a deep structure. The combination of RNN dynamics, the time-frequency structure of MIMO-OFDM signals, and the deep network enables RCNet to handle the interference and nonlinear distortion of MIMO-OFDM signals and thereby outperform existing methods. Unlike most existing NN-based detection strategies, RCNet is also shown to generalize well even with a limited training set (i.e., a similar amount of reference signals/training as standard model-based approaches). Numerical experiments demonstrate that RCNet can offer faster learning convergence and as much as a 20% gain in bit error rate over a shallow RC structure by compensating for nonlinear distortion of the MIMO-OFDM signal, such as that due to power amplifier compression at the transmitter or finite quantization resolution at the receiver.
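The reservoir computing principle underlying RCNet is that only a linear readout is trained while the recurrent network itself stays fixed and random. A minimal shallow echo-state-network sketch (far simpler than RCNet, with invented hyperparameters and a toy memoryless-distortion channel) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
T, Nres = 2000, 100         # symbols and reservoir size (assumed)
sr = 0.8                    # spectral radius (assumed hyperparameter)

# Fixed random reservoir; only the linear readout Wout is trained.
W = rng.standard_normal((Nres, Nres))
W *= sr / max(abs(np.linalg.eigvals(W)))
Win = rng.standard_normal(Nres)

symbols = rng.choice([-1.0, 1.0], size=T)  # BPSK-like symbol stream
# Toy channel: nonlinear compression (e.g., a saturating amplifier) + noise.
rx = np.tanh(1.5 * symbols) + 0.05 * rng.standard_normal(T)

# Drive the reservoir with the received signal and collect its states.
x = np.zeros(Nres)
states = np.empty((T, Nres))
for t in range(T):
    x = np.tanh(W @ x + Win * rx[t])
    states[t] = x

# Ridge-regression readout trained on the first half (the "pilots"),
# then hard-decision detection on the second half.
tr = T // 2
Wout = np.linalg.solve(states[:tr].T @ states[:tr] + 1e-3 * np.eye(Nres),
                       states[:tr].T @ symbols[:tr])
pred = np.sign(states[tr:] @ Wout)
ber = np.mean(pred != symbols[tr:])
```

Because training reduces to one linear solve, an RC detector converges with far fewer reference symbols than a fully trained deep network, which is the property the abstract highlights for limited training sets.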