Abstract: Utilizing deep learning (DL) techniques for radio-based positioning of user equipment (UE) through channel state information (CSI) fingerprints has demonstrated significant potential. DL models can extract complex characteristics from the CSI fingerprints of a particular environment and accurately predict the position of a UE. Nonetheless, the effectiveness of a DL model trained on CSI fingerprints is highly dependent on the particular training environment, limiting the trained model's applicability across different environments. This paper proposes a novel DL model structure consisting of two parts, where the first part aims at identifying features that are independent of any specific environment, while the second part combines those features in an environment-specific way for positioning. To train such a two-part model, we propose a multi-environment meta-learning (MEML) approach for the first part to facilitate training across various environments, while the second part of the model is trained solely on data from a specific environment. Our findings indicate that employing the MEML approach to initialize the weights of the DL model for a new, unseen environment significantly boosts the accuracy of UE positioning in the new target environment as well as the reliability of its uncertainty estimation. This method outperforms traditional transfer learning approaches, whether direct transfer learning (DTL) between environments or training entirely from scratch with data from the new environment. The proposed approach is verified with real measurements for both line-of-sight (LOS) and non-LOS (NLOS) environments.
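To make the two-part idea concrete, the sketch below shows one way such a split model and its meta-initialization could look in PyTorch. It is only an illustrative sketch: the layer sizes, the 2-D position output, and the Reptile-style meta-update are assumptions for illustration, not the paper's exact MEML procedure, and all names (feature_net, position_head, reptile_meta_step, env_loaders) are hypothetical.

```python
import torch
import torch.nn as nn

# Part 1: environment-agnostic feature extractor (meta-trained across environments).
# Input dimension 256 for the CSI fingerprint is an assumed placeholder.
feature_net = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)

# Part 2: environment-specific head mapping features to a 2-D UE position,
# trained only on data from the target environment.
position_head = nn.Linear(64, 2)


def reptile_meta_step(feature_net, env_loaders, inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    """One Reptile-style meta-update of the shared feature extractor using
    CSI/position batches from several training environments (assumed stand-in
    for the MEML training of part 1)."""
    init_state = {k: v.clone() for k, v in feature_net.state_dict().items()}
    for loader in env_loaders:  # each loader yields (csi, position) batches of one environment
        feature_net.load_state_dict(init_state)
        head = nn.Linear(64, 2)  # fresh environment-specific head for the inner adaptation
        opt = torch.optim.SGD(
            list(feature_net.parameters()) + list(head.parameters()), lr=inner_lr
        )
        batches = iter(loader)  # assumes the loader provides at least `inner_steps` batches
        for _ in range(inner_steps):
            csi, pos = next(batches)
            loss = nn.functional.mse_loss(head(feature_net(csi)), pos)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Move the meta-initialization toward the weights adapted to this environment.
        with torch.no_grad():
            for k, v in feature_net.state_dict().items():
                init_state[k] += meta_lr * (v - init_state[k])
    feature_net.load_state_dict(init_state)
```

After meta-training, the feature extractor weights would serve as the initialization for a new environment, where only the environment-specific head (and optionally the extractor) is fine-tuned on the new data.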
Abstract: Deep learning (DL) methods have been shown to improve the performance of several use cases for the fifth-generation (5G) New Radio (NR) air interface. In this paper, we investigate user equipment (UE) positioning using the channel state information (CSI) fingerprints between a UE and multiple base stations (BSs). In such a setup, a single DL model can be trained for UE positioning using the CSI fingerprints of the multiple BSs as input. Alternatively, based on the CSI at each BS, a separate DL model can be trained at each BS, and the outputs of the different models can then be combined to determine the UE's position. In this work, we compare these fusion techniques and show that fusing the outputs of separate models achieves higher positioning accuracy, especially in a dynamic scenario. We also show that the fusion of multiple outputs further benefits from considering the uncertainty of each BS model's output. For more efficient training of the DL models across BSs, we additionally propose a multi-task learning (MTL) scheme that shares some parameters across the models while training all models jointly. This method improves not only the accuracy of the individual models but also that of the final combined estimate. Lastly, we evaluate the reliability of the uncertainty estimation to ascertain which of the fusion methods provides the highest-quality uncertainty estimates.
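As a simple illustration of uncertainty-aware output fusion, the sketch below combines per-BS position estimates by inverse-variance weighting. This is one common way of exploiting per-BS uncertainty and is assumed here for illustration only; the paper's evaluated fusion methods may differ, and the function name and example values are hypothetical.

```python
import numpy as np


def fuse_bs_estimates(positions, variances):
    """Fuse per-BS UE position estimates via inverse-variance weighting.

    positions : (n_bs, 2) array of per-BS (x, y) estimates
    variances : (n_bs, 2) array of the corresponding predicted variances
    Returns the fused (x, y) estimate and its variance.
    """
    positions = np.asarray(positions, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)  # more certain BSs get larger weights
    fused = (weights * positions).sum(axis=0) / weights.sum(axis=0)
    fused_var = 1.0 / weights.sum(axis=0)
    return fused, fused_var


# Hypothetical example: three BSs, the third being the least certain.
bs_positions = [[10.2, 4.1], [10.6, 3.8], [12.0, 5.5]]
bs_variances = [[0.3, 0.3], [0.4, 0.5], [2.0, 2.5]]
fused_pos, fused_var = fuse_bs_estimates(bs_positions, bs_variances)
print(fused_pos, fused_var)
```

The fused estimate is pulled toward the more confident BS outputs, which is the intuition behind letting the per-BS uncertainty steer the combination rather than averaging all outputs equally.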