Abstract: Channel charting (CC) applies dimensionality reduction to channel state information (CSI) data at the infrastructure basestation side with the goal of extracting pseudo-position information for each user. The self-supervised nature of CC enables predictive tasks that depend on user position without requiring any ground-truth position information. In this work, we focus on the practically relevant streaming CSI data scenario, in which CSI is constantly estimated. To deal with storage limitations, we develop a novel streaming CC architecture that maintains a small core CSI dataset from which the channel charts are learned. Curation of the core CSI dataset is achieved using a min-max-similarity criterion. Numerical validation with measured CSI data demonstrates that our method approaches the accuracy obtained from the complete CSI dataset while using only a fraction of CSI storage and avoiding catastrophic forgetting of old CSI data.
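A minimal sketch of how such a similarity-based core-set update could look in NumPy, assuming vectorized CSI features and cosine similarity; the paper's exact criterion, similarity measure, and function names are not specified here and are illustrative:

```python
import numpy as np

def update_core_set(core, new_sample, budget):
    """Maintain a fixed-size core CSI dataset from a stream (sketch).

    When the budget is exceeded, drop the most redundant sample: the one
    whose maximum cosine similarity to any other core sample is largest
    (a min-max-similarity heuristic; the paper's criterion may differ).
    """
    core = np.vstack([core, new_sample[None, :]])
    if core.shape[0] <= budget:
        return core
    # Normalize rows and compute pairwise cosine similarities.
    normed = core / np.linalg.norm(core, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)          # ignore self-similarity
    redundancy = sim.max(axis=1)            # max similarity per sample
    keep = np.argsort(redundancy)[:budget]  # keep least redundant ones
    return core[np.sort(keep)]
```

With this rule, a newly arriving duplicate of an already-stored CSI sample is discarded, which is what prevents the core set from being flooded by near-identical measurements from a stationary user.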
Abstract: Channel charting is an emerging self-supervised method that maps channel state information (CSI) to a low-dimensional latent space, which represents pseudo-positions of user equipments (UEs). While this latent space preserves local geometry, i.e., nearby UEs are nearby in latent space, the pseudo-positions are in arbitrary coordinates and global geometry is not preserved. In order to enable channel charting in real-world coordinates, we propose a novel bilateration loss for multipoint wireless systems in which only the access point (AP) locations are known; no geometrical models or ground-truth UE position information is required. The idea behind this bilateration loss is to compare the received power at pairs of APs in order to determine whether a UE should be placed closer to one AP or the other in latent space. We demonstrate the efficacy of our method using channel vectors from a commercial ray-tracer.
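A minimal interpretation of such a pairwise power-comparison loss for a single UE, written as a hinge penalty in NumPy; the paper's actual loss may use margins, weights, or a different distance, so names and details here are illustrative:

```python
import numpy as np

def bilateration_loss(z, ap_pos, rx_power):
    """Hinge-type bilateration loss for one UE (illustrative sketch).

    z        : (2,) latent (pseudo-)position of the UE
    ap_pos   : (A, 2) known AP positions
    rx_power : (A,) received power at each AP

    For every AP pair (i, j) with rx_power[i] > rx_power[j], penalize
    placements where the UE sits farther from AP i than from AP j.
    """
    dist = np.linalg.norm(ap_pos - z[None, :], axis=1)  # (A,) distances
    loss = 0.0
    for i in range(len(rx_power)):
        for j in range(len(rx_power)):
            if rx_power[i] > rx_power[j]:
                loss += max(0.0, dist[i] - dist[j])
    return loss
```

The loss is zero whenever the latent placement is consistent with every pairwise power ordering, so minimizing it (together with a local-geometry objective) pulls the chart toward real-world coordinates without any ground-truth positions.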
Abstract: Orthogonal frequency-division multiplexing (OFDM) time-domain signals exhibit a high peak-to-average (power) ratio (PAR), which requires linear radio-frequency chains to avoid an increase in error-vector magnitude (EVM) and out-of-band (OOB) emissions. In this paper, we propose a novel joint PAR reduction and precoding algorithm that relaxes these linearity requirements in massive multiuser (MU) multiple-input multiple-output (MIMO) wireless systems. Concretely, we develop a novel alternating projections method, which limits the PAR and transmit power increase while simultaneously suppressing MU interference. We provide a theoretical foundation for our algorithm and present simulation results for a massive MU-MIMO-OFDM scenario. Our results demonstrate significant PAR reduction while limiting the transmit power, without increasing EVM or OOB emissions.
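To illustrate the projection structure only, here is a simplified single-antenna analogue in NumPy that alternates between clipping time-domain peaks and re-imposing the data subcarriers (so reserved tones absorb the distortion and the data tones stay EVM-free); the paper's algorithm additionally enforces MU precoding constraints, and all names and parameters below are illustrative assumptions:

```python
import numpy as np

def par_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip_and_restore(data, data_idx, n_sc, clip_ratio=1.5, iters=50):
    """Single-antenna analogue of alternating projections for PAR
    reduction: alternate between (a) clipping time-domain peaks and
    (b) restoring the data subcarriers in the frequency domain.
    Unused (reserved) tones absorb the clipping distortion."""
    X = np.zeros(n_sc, dtype=complex)
    X[data_idx] = data
    x = np.fft.ifft(X)
    for _ in range(iters):
        rms = np.sqrt(np.mean(np.abs(x) ** 2))
        limit = clip_ratio * rms
        mag = np.maximum(np.abs(x), 1e-12)
        x = np.where(mag > limit, x / mag * limit, x)  # project onto low-PAR set
        X = np.fft.fft(x)
        X[data_idx] = data                             # project onto data constraint
        x = np.fft.ifft(X)
    return x
```

Because the final step re-imposes the data subcarriers exactly, the output signal carries the data with zero EVM on those tones, while the peaks are shaped by the clipping projection.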
Abstract: Recent channel state information (CSI)-based positioning pipelines rely on deep neural networks (DNNs) in order to learn a mapping from estimated CSI to position. Since real-world communication transceivers suffer from hardware impairments, CSI-based positioning systems typically rely on features that are designed by hand. In this paper, we propose a CSI-based positioning pipeline that directly takes raw CSI measurements and learns features using a structured DNN in order to generate probability maps describing the likelihood of the transmitter being at pre-defined grid points. To further improve the positioning accuracy of moving user equipments, we propose to fuse a time-series of learned CSI features or a time-series of probability maps. To demonstrate the efficacy of our methods, we perform experiments with real-world indoor line-of-sight (LoS) and non-LoS channel measurements. We show that CSI feature learning and time-series fusion can reduce the mean distance error by up to 2.5$\boldsymbol\times$ compared to the state-of-the-art.
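One simple way probability-map fusion can work, sketched in NumPy: combine per-snapshot maps by log-domain averaging (a normalized geometric mean) and read out the probability-weighted centroid over the grid. This is an illustrative assumption; the paper also considers fusing learned CSI features, and its fusion rule may differ:

```python
import numpy as np

def fuse_and_estimate(prob_maps, grid_points):
    """Fuse a time-series of probability maps into a position estimate.

    prob_maps   : (T, G) per-snapshot probabilities over G grid points
    grid_points : (G, 2) grid-point coordinates

    Fuses via a normalized geometric mean (log-average), then returns
    the probability-weighted centroid as the position estimate.
    """
    logp = np.log(prob_maps + 1e-12).mean(axis=0)  # (G,) log-average
    fused = np.exp(logp - logp.max())              # stable exponentiation
    fused /= fused.sum()                           # renormalize
    return fused @ grid_points                     # (2,) centroid estimate
```

The geometric mean down-weights grid points that any single snapshot considers unlikely, which is what makes fusing a time-series more robust than any single noisy map.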
Abstract: Massive multi-user multiple-input multiple-output (MU-MIMO) wireless systems operating at millimeter-wave (mmWave) frequencies enable simultaneous wideband data transmission to a large number of users. In order to reduce the complexity of MU precoding in all-digital basestation architectures, we propose a two-stage precoding architecture that first performs precoding using a sparse matrix in the beamspace domain, followed by an inverse fast Fourier transform that converts the result to the antenna domain. The sparse precoding matrix requires a small number of multipliers and enables regular hardware architectures, which allows the design of hardware-efficient all-digital precoders. Simulation results demonstrate that our methods approach the error rate of conventional Wiener filter precoding with more than 2x lower complexity.
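The two-stage structure can be sketched in a few lines of NumPy: apply the (sparse) beamspace precoding matrix, then a unitary inverse DFT across the array to move to the antenna domain. Matrix shapes and the scaling convention below are illustrative assumptions:

```python
import numpy as np

def two_stage_precode(s, P_beam):
    """Two-stage precoding sketch.

    s      : (U,) user symbol vector
    P_beam : (B, U) beamspace precoding matrix (sparse in practice)

    Applies the beamspace precoder, then a unitary inverse DFT that
    converts the beamspace result to the antenna domain. In hardware,
    the IDFT becomes an IFFT and P_beam needs few multipliers because
    most of its entries are zero.
    """
    x_beam = P_beam @ s                                # beamspace product
    return np.fft.ifft(x_beam) * np.sqrt(len(x_beam))  # unitary IDFT
```

This is numerically equivalent to one-stage antenna-domain precoding with the dense matrix $\mathbf{F}^H \mathbf{P}_{\text{beam}}$ (where $\mathbf{F}$ is the unitary DFT matrix), which is what makes the factorization a pure complexity optimization rather than an approximation.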
Abstract: Wireless communication systems that rely on orthogonal frequency-division multiplexing (OFDM) suffer from a high peak-to-average (power) ratio (PAR), which necessitates power-inefficient radio-frequency (RF) chains to avoid an increase in error-vector magnitude (EVM) and out-of-band (OOB) emissions. The situation is further aggravated in massive multiuser (MU) multiple-input multiple-output (MIMO) systems that would require hundreds of linear RF chains. In this paper, we present a novel approach to joint precoding and PAR reduction that builds upon a novel $\ell^p\!-\!\ell^q$-norm formulation, which is able to find minimum PAR solutions while suppressing MU interference. We provide a theoretical underpinning of our approach and present simulation results for a massive MU-MIMO-OFDM system that demonstrate significant reductions in PAR at low complexity, without causing an increase in EVM or OOB emissions.
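The property that makes an $\ell^p$-norm objective a useful PAR proxy is that, as $p$ grows, the $\ell^p$ norm approaches the peak magnitude (the $\ell^\infty$ norm), so a large-$p$ norm acts as a smooth surrogate for peak minimization. A small numerical check (values chosen arbitrarily for illustration):

```python
import numpy as np

# As p grows, the l^p norm of a signal approaches its peak magnitude
# (the l^infinity norm): ||x||_p decreases monotonically in p and
# converges to max|x_i|, which is why large-p norm objectives act as
# smooth surrogates for peak/PAR minimization.
x = np.array([1.0, -0.5, 3.0, 0.25])
norms = {p: np.linalg.norm(x, p) for p in (2, 4, 16, 64)}
peak = np.linalg.norm(x, np.inf)  # the peak magnitude, 3.0
```

Unlike the non-smooth $\ell^\infty$ norm, the $\ell^p$ norms are differentiable away from zero, which is what makes them amenable to the kind of optimization formulation the abstract describes.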
Abstract: Beamspace processing is an emerging technique to reduce baseband complexity in massive multiuser (MU) multiple-input multiple-output (MIMO) communication systems operating at millimeter-wave (mmWave) and terahertz frequencies. The high directionality of wave propagation at such high frequencies ensures that only a small number of transmission paths exist between user equipments and the basestation (BS). In order to resolve the sparse nature of wave propagation, beamspace processing traditionally computes a spatial discrete Fourier transform (DFT) across a uniform linear antenna array at the BS, where each DFT output is associated with a specific beam. In this paper, we study optimality conditions of the DFT for sparsity-based beamspace processing with idealistic mmWave channel models and realistic channels. To this end, we propose two algorithms that learn unitary beamspace transforms using an $\ell^4$-norm-based sparsity measure, and we investigate their optimality theoretically and via simulations.
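One plausible form of such an $\ell^4$-norm sparsity measure, sketched in NumPy: apply a candidate unitary transform to the channel vectors, normalize, and sum fourth powers of the magnitudes, so that the measure is largest when energy is concentrated in few beams. The normalization and function name are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def l4_sparsity(U, H):
    """l4-norm sparsity measure (illustrative sketch).

    U : (N, N) candidate unitary beamspace transform
    H : (N, M) channel matrix whose columns are channel vectors

    Transforms each channel vector, normalizes it to unit l2 norm, and
    sums the fourth powers of the magnitudes. The measure reaches its
    maximum (1 per column) when a column's energy sits in a single beam,
    so maximizing it over unitary U seeks the sparsest beamspace.
    """
    Y = U @ H
    Y = Y / np.linalg.norm(Y, axis=0, keepdims=True)
    return np.sum(np.abs(Y) ** 4)
```

For an idealistic single-path channel aligned with a DFT beam, the DFT maximizes this measure, consistent with why the spatial DFT is the traditional choice; for realistic channels, a learned unitary transform can score higher.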