Abstract: Accurate localization in indoor environments is challenging due to the Non-Line-of-Sight (NLoS) nature of signal propagation. In this paper, we explore the use of AI/ML techniques for positioning accuracy enhancement in Indoor Factory (InF) scenarios. The proposed neural network, which we term LocNet, is trained on measurements such as the Channel Impulse Response (CIR) and Reference Signal Received Power (RSRP) from multiple Transmit Receive Points (TRPs). Simulation results show that when using measurements from 18 TRPs, LocNet achieves 9 cm positioning accuracy at the 90th percentile. Additionally, we demonstrate that the same model generalizes effectively even when measurements from some TRPs randomly become unavailable. Lastly, we provide insights into the robustness of the trained model to errors in the ground-truth labels used for training.
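The abstract does not detail LocNet's architecture, so the following is only a minimal sketch, assuming a plain fully connected regressor over concatenated per-TRP features; the tap count `CIR_TAPS`, the layer widths, and the 2D output are illustrative assumptions, not the paper's design.

```python
# Hypothetical LocNet-style position regressor (PyTorch); sizes are assumptions.
import torch
import torch.nn as nn

N_TRP = 18        # number of Transmit Receive Points, as in the abstract
CIR_TAPS = 256    # assumed number of CIR taps retained per TRP

class LocNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-TRP features: CIR taps plus one RSRP scalar.
        in_dim = N_TRP * (CIR_TAPS + 1)
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 2),    # (x, y) position estimate
        )

    def forward(self, cir, rsrp):
        # cir: (batch, N_TRP, CIR_TAPS), rsrp: (batch, N_TRP)
        x = torch.cat([cir.flatten(1), rsrp], dim=1)
        return self.net(x)

model = LocNet()
pos = model(torch.randn(4, N_TRP, CIR_TAPS), torch.randn(4, N_TRP))
print(pos.shape)  # torch.Size([4, 2])
```

Zeroing or masking the features of a dropped TRP during both training and inference is one simple way to exercise the robustness to unavailable TRPs that the abstract reports.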
Abstract: This research investigates the use of AI/ML to achieve centimeter-level user positioning in 6G applications such as the Industrial Internet of Things (IIoT). Initial results show that our AI/ML-based method can estimate user positions with an accuracy of 17 cm in an indoor factory environment. In this proposal, we highlight our approaches and future directions.
Abstract: Ensuring adequate wireless coverage in upcoming communication technologies such as 6G is expected to be challenging. This is because user demands for higher data rates require an increase in carrier frequencies, which in turn reduces diffraction effects (and hence coverage) in complex multipath environments. Intelligent reflecting surfaces have been proposed as a way of restoring coverage by adaptively reflecting incoming electromagnetic waves in desired directions. This is accomplished by judiciously adding extra phases at different points on the surface. In practice, these extra phases are only available in discrete steps due to hardware constraints. Computing them is computationally challenging when they can only be picked from a discrete set, and existing approaches to this problem were either heuristic or based on evolutionary algorithms. We solve this problem by proposing fast algorithms with provably optimal solutions. Our algorithms have linear complexity and are presented with rigorous proofs of their optimality. We show that the proposed algorithms outperform these existing approaches. We analyze situations in which unwanted grating lobes arise in the radiation pattern and discuss mitigation strategies, such as the use of triangular lattices and prephasing techniques, to eliminate them. We also demonstrate how our algorithms can leverage these techniques to deliver optimum beamforming solutions.
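For context, the simplest baseline for the discrete-phase problem is to quantize each element's ideal continuous phase to the nearest available level; the sketch below implements that baseline, not the paper's provably optimal linear-time algorithm, whose construction the abstract does not describe.

```python
# Nearest-level phase quantization for a surface with K discrete phase levels.
import numpy as np

def quantize_phases(ideal_phase, K):
    """Map each ideal continuous phase to the nearest of K uniform levels."""
    levels = 2 * np.pi * np.arange(K) / K                  # {0, 2*pi/K, ...}
    # Wrapped angular distance between each ideal phase and each level.
    diff = np.angle(np.exp(1j * (ideal_phase[:, None] - levels[None, :])))
    return levels[np.abs(diff).argmin(axis=1)]

# Example: co-phase N reflecting elements toward a receiver.
rng = np.random.default_rng(0)
N, K = 64, 4
channel_phase = rng.uniform(0, 2 * np.pi, N)  # combined incident+reflected path phase
ideal = -channel_phase                        # continuous optimum aligns all terms
q = quantize_phases(ideal, K)
gain = np.abs(np.exp(1j * (channel_phase + q)).sum()) / N
print(f"normalized array gain with {K} levels: {gain:.3f}")  # ~0.90 for K=4
```

The gap between this heuristic and the true optimum is the kind of gap the proposed optimal algorithms are designed to close.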
Abstract: Random Access is an important step in enabling the initial attachment of a User Equipment (UE) to a Base Station (gNB). The UE identifies itself by embedding a Preamble Index (RAPID) in the phase rotation of a known base sequence, which it transmits on the Physical Random Access Channel (PRACH). The signal on the PRACH also enables the estimation of the propagation delay, often known as Timing Advance (TA), which is induced by the UE's position. Traditional receivers estimate the RAPID and TA using correlation-based techniques. This paper presents an alternative receiver approach that uses AI/ML models, wherein two neural networks are proposed, one for the RAPID and one for the TA. Unlike prior works, these two models can run in parallel rather than sequentially. Experiments with both simulated data and over-the-air hardware captures highlight the improved performance of the proposed AI/ML-based techniques compared to conventional correlation methods.
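A hedged sketch of the two-model structure follows: one network classifies the RAPID and an independent network regresses the TA, so the two can be evaluated in parallel. The input featurization (stacked real/imaginary parts of the received preamble) and all layer sizes are assumptions for illustration, not the paper's exact design.

```python
# Two independent heads over the received PRACH samples (PyTorch).
import torch
import torch.nn as nn

SEQ_LEN = 839        # long Zadoff-Chu PRACH preamble length
N_PREAMBLES = 64     # candidate RAPIDs per PRACH occasion

class RapidNet(nn.Module):
    """Classifies which of the 64 preamble indices was transmitted."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * SEQ_LEN, 512), nn.ReLU(),
                                 nn.Linear(512, N_PREAMBLES))
    def forward(self, x):
        return self.net(x)      # logits over RAPIDs

class TaNet(nn.Module):
    """Regresses the timing advance as a single scalar."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * SEQ_LEN, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

x = torch.randn(8, 2 * SEQ_LEN)       # placeholder received samples (re/im stacked)
rapid_logits = RapidNet()(x)          # the two forward passes share no state,
ta_estimate = TaNet()(x)              # so they can run concurrently
```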
Abstract: Massive MIMO antennas in cellular systems help support a large number of users in the same time-frequency resource and also provide significant array gain for uplink reception. However, channel estimation in such large antenna systems is difficult: pilot assignment for multiple users is challenging, and the pilot overhead, especially for rapidly changing channels, can significantly diminish system throughput. A pilotless transceiver, where the receiver performs blind demodulation, can solve these issues and boost system throughput by eliminating the need for pilots in channel estimation. In this paper, we propose an iterative matrix decomposition algorithm for the blind demodulation of massive MIMO OFDM signals. This new decomposition technique simultaneously provides estimates of both the user symbols and the user channel in the frequency domain (up to a scaling factor) without any pilots. Simulation results demonstrate that the lack of pilots does not degrade the error performance of the proposed algorithm when compared to maximal-ratio combining (MRC) with pilot-based channel estimation across a wide range of signal strengths.
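To make the "both-at-once, up to a scaling factor" idea concrete, here is a toy alternating decomposition for a single-user, single-subcarrier block modeled as Y = h s^T + noise; the least-squares updates and the QPSK projection are illustrative, and the paper's actual iterative algorithm may differ in its details.

```python
# Toy rank-1 alternating decomposition: jointly recover channel h and
# symbols s from Y = h s^T + noise, with no pilots.
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 128                       # antennas x symbols on one subcarrier
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
qpsk = np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j])
s = qpsk[rng.integers(4, size=N)]
noise = 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
Y = np.outer(h, s) + noise

h_est = rng.normal(size=M) + 1j * rng.normal(size=M)   # random initialization
for _ in range(20):
    s_est = h_est.conj() @ Y / np.linalg.norm(h_est) ** 2         # LS symbols
    s_est = qpsk[np.abs(s_est[:, None] - qpsk).argmin(axis=1)]    # project to QPSK
    h_est = Y @ s_est.conj() / np.linalg.norm(s_est) ** 2         # LS channel

# The leftover ambiguity is a common constellation rotation (the scaling factor).
rots = [1, 1j, -1, -1j]
acc = max(np.mean(np.abs(s_est - r * s) < 1e-6) for r in rots)
print(f"symbol accuracy up to rotation: {acc:.2%}")
```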
Abstract: Massive MIMO opens up attractive possibilities for next-generation wireless systems, with its large number of antennas offering spatial diversity and multiplexing gain. However, the fronthaul link that connects a massive MIMO Remote Radio Head (RRH) and carries IQ samples to the Baseband Unit (BBU) of the base station can throttle network capacity if appropriate data compression techniques are not applied. In this paper, we propose an iterative technique for fronthaul load reduction in the uplink for massive MIMO systems that exploits the convolution structure of the received signals. We use an alternating minimization algorithm for blind deconvolution of the received data matrix that provides compression ratios of 30-50. In addition, the technique presented here can be used for blind decoding of OFDM signals in massive MIMO systems.
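As a rough illustration of why a factored representation saves fronthaul bandwidth, suppose one uplink block of IQ samples can be represented to good accuracy by its deconvolved factors rather than the raw matrix; the block dimensions below are assumptions chosen only to show the arithmetic, while the 30-50 range is the paper's reported result.

```python
# Back-of-the-envelope fronthaul saving from forwarding factors instead of raw IQ.
M, N = 64, 64                 # assumed antennas x time samples per block
raw = M * N                   # IQ samples forwarded without compression
factored = M + N              # one channel-like factor + one data-like factor
print(f"compression ratio ~ {raw / factored:.0f}x")  # -> 32x for this block size
```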