Abstract:Owing to the ubiquity of cellular communication signals, positioning with the fifth-generation (5G) signal has emerged as a promising solution in global navigation satellite system-denied areas. Unfortunately, although the antenna arrays widely employed in 5G remote radio units (RRUs) facilitate measurement of the direction of arrival (DOA), DOA-based positioning performance is severely degraded by array errors. This paper proposes an in-situ calibration framework in which a user terminal transmits 5G reference signals at several known positions in the actual operating environment and the accessible RRUs estimate their array errors from these reference signals. Further, since sub-6GHz small-cell RRUs deployed for indoor coverage generally have small-aperture antenna arrays whereas 5G signals offer plentiful bandwidth, this work separates the multipath components via super-resolution delay estimation based on the maximum likelihood criterion. This differs significantly from existing in-situ calibration works, which resolve multipath in the spatial domain. The superiority of the proposed method is first verified by numerical simulations. We then demonstrate via field tests with commercial 5G equipment that in-situ calibration using the proposed method reduces the 1-${\sigma}$ DOA estimation error by 46.7%.
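A minimal numerical sketch of this calibration idea is given below (illustration only, with assumed parameters such as a 4-element half-wavelength ULA, 256 subcarriers, and a two-path channel; it is not the authors' implementation): the multipath components are separated by a maximum-likelihood grid search over a pair of delays in the frequency domain, and the per-element complex array errors are then estimated from the recovered line-of-sight (LOS) component at the known calibration DOA.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 256                       # antennas, subcarriers (assumed)
df, d_lam = 240e3, 0.5              # subcarrier spacing, element spacing in wavelengths
f = np.arange(N) * df
theta_los = np.deg2rad(20.0)        # known DOA of the calibration position

a_ideal = np.exp(1j * 2 * np.pi * d_lam * np.arange(M) * np.sin(theta_los))
err_true = (1 + 0.2 * rng.standard_normal(M)) * np.exp(1j * 0.3 * rng.standard_normal(M))

# Two-path frequency-domain channel per antenna: LOS plus one reflection
tau = np.array([100e-9, 180e-9])
gains = np.array([1.0, 0.6 * np.exp(1j * 1.0)])
a_nlos = np.exp(1j * 2 * np.pi * d_lam * np.arange(M) * np.sin(np.deg2rad(-35.0)))
A_true = np.stack([err_true * a_ideal, err_true * a_nlos], axis=1)       # M x 2
D = np.exp(-2j * np.pi * np.outer(f, tau))                               # N x 2 delay atoms
H = (D * gains) @ A_true.T
H += 0.01 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))

# Maximum-likelihood grid search over a pair of delays (exhaustive, for illustration only)
grid = np.arange(0.0, 400e-9, 4e-9)
best_cost, best_pair = np.inf, None
for t1 in grid:
    for t2 in grid:
        if t2 <= t1:
            continue
        Dg = np.exp(-2j * np.pi * np.outer(f, [t1, t2]))
        proj = Dg @ np.linalg.lstsq(Dg, H, rcond=None)[0]
        cost = np.linalg.norm(H - proj) ** 2
        if cost < best_cost:
            best_cost, best_pair = cost, (t1, t2)

# Per-path spatial responses at the estimated delays; the earliest delay is the LOS
Dg = np.exp(-2j * np.pi * np.outer(f, best_pair))
X = np.linalg.lstsq(Dg, H, rcond=None)[0]            # 2 x M
los = X[0]                                           # LOS spatial signature (errors included)

err_hat = los / a_ideal                              # complex error per element (LOS gain absorbed)
err_hat /= err_hat[0] / np.abs(err_hat[0])           # phase-reference to element 0
ref = err_true / (err_true[0] / np.abs(err_true[0]))
print("max element-wise calibration error:", np.round(np.max(np.abs(err_hat - ref)), 4))
```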
Abstract:Massive arrays deployed in millimeter-wave systems enable high angular resolution, which in turn facilitates sub-meter localization services. Albeit suboptimal, the most popular localization approach to date has been the so-called two-step procedure, where triangulation is applied after aggregating the angle-of-arrival (AoA) measurements from the collaborating base stations. This is mainly due to the prohibitive computational cost of existing direct localization approaches in large-scale systems. To address this issue, we propose a fast direct localization solver based on deep unfolding. First, direct localization is formulated as a joint $l_1$-$l_{2,1}$ norm sparse recovery problem, which is solved using the alternating direction method of multipliers (ADMM). Next, we develop a deep ADMM unfolding network (DAUN) that learns the ADMM parameter settings from training data, and a position refinement algorithm is proposed for the DAUN. Finally, simulation results showcase the superiority of the proposed DAUN over baseline solvers in terms of localization accuracy, convergence speed, and significantly lower computational complexity.
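As a rough illustration of the model-based iteration that such an unfolding network would parameterize (with the step sizes and thresholds becoming learnable per layer), the sketch below runs plain ADMM on a joint $l_1$-$l_{2,1}$ (sparse-group) recovery problem with an assumed random dictionary and toy dimensions; it is not the DAUN itself.

```python
import numpy as np

def soft_entry(V, t):
    # complex soft-thresholding (prox of t*||.||_1), applied element-wise
    mag = np.abs(V)
    return V * np.maximum(1.0 - t / np.maximum(mag, 1e-12), 0.0)

def soft_row(V, t):
    # row-wise group shrinkage (prox of t*||.||_{2,1})
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return V * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def admm_sparse_group(Y, A, lam1, lam2, rho=1.0, iters=200):
    # min_X 0.5||Y - A X||_F^2 + lam1 ||X||_1 + lam2 ||X||_{2,1}
    G = A.shape[1]
    Z = np.zeros((G, Y.shape[1]), dtype=complex)
    U = np.zeros_like(Z)
    AH = A.conj().T
    P = np.linalg.inv(AH @ A + rho * np.eye(G))            # cached since rho is fixed here
    for _ in range(iters):
        X = P @ (AH @ Y + rho * (Z - U))                   # least-squares subproblem
        Z = soft_row(soft_entry(X + U, lam1 / rho), lam2 / rho)  # prox of the composite penalty
        U = U + X - Z
    return Z

# Toy direct-localization-style setup: a few active grid positions, K collaborating BSs
rng = np.random.default_rng(1)
Msnap, G, K = 64, 120, 4            # measurements per BS, grid size, base stations (assumed)
A = (rng.standard_normal((Msnap, G)) + 1j * rng.standard_normal((Msnap, G))) / np.sqrt(2 * Msnap)
X_true = np.zeros((G, K), dtype=complex)
X_true[[17, 90]] = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
Y = A @ X_true + 0.01 * (rng.standard_normal((Msnap, K)) + 1j * rng.standard_normal((Msnap, K)))
X_hat = admm_sparse_group(Y, A, lam1=0.02, lam2=0.1)
print("strongest rows (ideally 17 and 90):", np.sort(np.argsort(np.linalg.norm(X_hat, axis=1))[-2:]))
```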
Abstract:Channel-state-information-based localization in 5G networks promises far more accurate positions than previous generations of communication networks. However, there is no unified and effective platform to support research on 5G localization algorithms. This paper releases a link-level simulator for 5G localization, which captures the realistic physical behavior of 5G positioning-signal transmission. Specifically, we first develop a simulation architecture that supports elaborate parameter configuration and physical-layer processing, with link modeling at both sub-6GHz and millimeter-wave (mmWave) frequency bands. Subsequently, the critical physical-layer components that determine localization performance are designed and integrated. In particular, a lightweight new-radio channel model and hardware impairment functions that significantly limit parameter estimation accuracy are developed. Finally, we present three application cases to evaluate the simulator, i.e., two-dimensional mobile terminal localization, mmWave beam sweeping, and beamforming-based angle estimation. The numerical results in these application cases illustrate how the performance of localization algorithms varies under different impairment conditions.
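The sketch below illustrates, under assumed parameters, the kind of link-level drop such a simulator produces: an OFDM pilot standing in for a positioning reference signal passes through a two-tap channel, carrier frequency offset and random-walk phase noise are added as example hardware impairments, and a coarse correlation-based TOA estimate is formed. It is a toy stand-in, not the released simulator.

```python
import numpy as np

rng = np.random.default_rng(2)
N, cp = 512, 64                              # subcarriers, cyclic prefix length (assumed)
scs = 30e3                                   # subcarrier spacing (assumed numerology)
fs = N * scs                                 # sample rate of this toy link

# QPSK pilot symbols stand in for a positioning reference signal (PRS)
x_freq = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
x_time = np.fft.ifft(x_freq) * np.sqrt(N)
tx = np.concatenate([x_time[-cp:], x_time])  # add cyclic prefix

# Two-tap multipath channel: LOS at 12 samples, one reflection at 25 samples
h = np.zeros(64, dtype=complex)
h[12], h[25] = 1.0, 0.5 * np.exp(1j * 0.7)
rx = np.convolve(tx, h)[:tx.size]

# Example hardware impairments: carrier frequency offset and random-walk phase noise
cfo_hz = 200.0
phase_noise = np.cumsum(2 * np.pi * 1e-4 * rng.standard_normal(rx.size))
rx = rx * np.exp(1j * (2 * np.pi * cfo_hz * np.arange(rx.size) / fs + phase_noise))
rx += 0.05 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size)) / np.sqrt(2)

# Coarse correlation-based TOA estimate (the kind of baseline the simulator can evaluate)
corr = np.abs(np.correlate(rx, tx, mode="full"))
toa = np.argmax(corr) - (tx.size - 1)
print("estimated LOS delay:", toa, "samples =", round(toa / fs * 1e9, 1), "ns")
```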
Abstract:Wireless communication is rapidly reshaping entire industry sectors. In particular, mobile edge computing (MEC), as an enabling technology for the industrial Internet of things (IIoT), brings powerful computing and storage infrastructure closer to mobile terminals and thereby significantly lowers response latency. To reap the benefit of proactive caching at the network edge, precise knowledge of the popularity pattern among end devices is essential. However, the complex and dynamic nature of content popularity over space and time, together with the data-privacy requirements of many IIoT scenarios, makes its acquisition challenging. In this article, we propose an unsupervised and privacy-preserving popularity prediction framework for MEC-enabled IIoT. The concepts of local and global popularity are introduced, and the time-varying popularity of each user is modeled as a model-free Markov chain. On this basis, a novel unsupervised recurrent federated learning (URFL) algorithm is proposed to predict the distributed popularity while achieving privacy preservation and unsupervised training. Simulations indicate that the proposed framework reduces the root-mean-squared prediction error by up to $60.5\%-68.7\%$, while avoiding both manual labeling and violation of users' data privacy.
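The toy sketch below conveys the flavour of the approach under simplifying assumptions (a linear one-step-ahead popularity predictor instead of the recurrent network, and synthetic Markov-chain request streams): each user fits its model self-supervised on local data, only the fitted weights are averaged at the edge server, and prediction then happens locally. It is not the URFL algorithm itself.

```python
import numpy as np

F, U, T, win = 5, 8, 200, 20      # contents, users, time slots, popularity window (assumed)

def local_history(seed):
    # per-user request stream drawn from a user-specific (hidden) Markov chain
    r = np.random.default_rng(seed)
    P = r.dirichlet(np.ones(F), size=F)
    s, hist = int(r.integers(F)), []
    for _ in range(T):
        s = int(r.choice(F, p=P[s]))
        hist.append(s)
    # sliding-window local popularity vectors (request fractions per content)
    return np.array([np.bincount(hist[t:t + win], minlength=F) / win for t in range(T - win)])

def local_fit(pop):
    # self-supervised one-step-ahead linear predictor: pop[t+1] ~ W @ pop[t]
    X, Y = pop[:-1], pop[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W.T

# "Federated" round: only the locally fitted weights are averaged at the edge server
W_global = np.mean([local_fit(local_history(u)) for u in range(U)], axis=0)

# Each user applies the global model to its own latest local popularity vector
pop_u0 = local_history(0)
print("user-0 next-slot popularity prediction:", np.round(W_global @ pop_u0[-1], 3))
```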
Abstract:Beam alignment and tracking in high-mobility scenarios such as high-speed railway (HSR) is extremely challenging, since estimating the fast time-varying channel introduces large overhead and significant time delay. To tackle this challenge, we propose a learning-aided beam prediction scheme for HSR networks, which predicts the beam directions and channel amplitudes over a future time window with fine time granularity, using a group of observations. Concretely, we transform the high-dimensional beam prediction problem into a two-stage task: a low-dimensional parameter estimation followed by a cascaded hybrid beamforming operation. In the first stage, the location and speed of a terminal are estimated via the maximum likelihood criterion, and a data-driven data fusion module is designed to improve the estimation accuracy and robustness. Then, probable future beam directions and channel amplitudes are predicted based on HSR scenario priors, including the deterministic trajectory, motion model, and channel model. Furthermore, we incorporate a learnable nonlinear mapping module into the overall beam prediction to accommodate nonlinear tracks. Both learnable modules are model-based and offer good interpretability. Compared with existing beam management schemes, the proposed beam prediction incurs (near) zero overhead and time delay. Simulation results verify the effectiveness of the proposed scheme.
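Under strong simplifying assumptions (a straight track, constant estimated speed, and a free-space amplitude trend), the sketch below shows how future beam indices and channel amplitudes follow deterministically from the estimated position and speed; the paper's learnable fusion and nonlinear-mapping modules are omitted.

```python
import numpy as np

# Assumed geometry: base station at the origin, straight track parallel to the x-axis at
# perpendicular distance d_perp; terminal position along the track is x0 + v * t.
d_perp = 30.0                                   # m
x0_hat, v_hat = -200.0, 80.0                    # estimated position (m) and speed (m/s), stage one
N_beam = 64                                     # DFT codebook size (assumed)

t = np.arange(0.0, 5.0, 0.1)                    # predict 5 s ahead at 0.1 s granularity
x = x0_hat + v_hat * t
theta = np.arctan2(x, d_perp)                   # angle from array broadside (assumed orientation)

# Map each predicted angle to the nearest codeword of a sin-domain uniform DFT codebook
sin_grid = np.linspace(-1.0, 1.0, N_beam, endpoint=False) + 1.0 / N_beam
beam_idx = np.argmin(np.abs(np.sin(theta)[:, None] - sin_grid[None, :]), axis=1)

# Predicted channel amplitude from a free-space path-loss trend (assumption)
amp = 1.0 / np.hypot(x, d_perp)

for k in range(0, t.size, 10):                  # print one prediction per second
    print(f"t={t[k]:.1f}s  beam index={beam_idx[k]:2d}  relative amplitude={amp[k]:.4f}")
```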
Abstract:State-of-the-art schemes for performance analysis and optimization of multiple-input multiple-output systems generally degrade or even become invalid in dynamic, complex scenarios with unknown interference and channel state information (CSI) uncertainty. To adapt to these challenging settings and better accomplish such network auto-tuning tasks, we propose a generic learnable model-driven framework in this paper. To explain how the proposed framework works, we take regularized zero-forcing precoding as a usage instance and design a lightweight neural network that refines coarse model-driven approximations of the sum rate and detection error. We then estimate the CSI uncertainty on the learned predictor in an iterative manner and, on this basis, optimize the transmit regularization term and the subsequent receive power-scaling factors. A deep-unfolded projected gradient descent algorithm is proposed for power scaling, which achieves a favorable trade-off between convergence rate and robustness.
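For concreteness, the sketch below implements only the coarse model-driven part of this usage instance under assumed dimensions: RZF precoding for a small MU-MISO downlink, a sum-rate evaluation, and a simple projected-gradient search over the regularization term. The learnable refinement and CSI-uncertainty estimation described in the paper are not included.

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, K, P, sigma2 = 8, 4, 1.0, 0.1                  # tx antennas, users, power, noise (assumed)
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

def rzf_sum_rate(alpha):
    # RZF directions with regularization alpha, scaled to the total power budget
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    W *= np.sqrt(P) / np.linalg.norm(W)
    G = H @ W                                      # effective channel after precoding
    sig = np.abs(np.diag(G)) ** 2
    intf = np.sum(np.abs(G) ** 2, axis=1) - sig
    return np.sum(np.log2(1 + sig / (intf + sigma2)))

# Projected gradient ascent on alpha (finite-difference gradient, alpha kept in a box)
alpha, step, eps = 1.0, 0.2, 1e-3
for _ in range(50):
    grad = (rzf_sum_rate(alpha + eps) - rzf_sum_rate(alpha - eps)) / (2 * eps)
    alpha = np.clip(alpha + step * grad, 1e-2, 10.0)   # projection step
print("tuned alpha:", round(alpha, 3), "sum rate [bit/s/Hz]:", round(rzf_sum_rate(alpha), 3))
```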
Abstract:The ubiquity, large bandwidth, and spatial diversity of the fifth-generation (5G) cellular signal render it a promising candidate for accurate positioning in indoor environments where the global navigation satellite system (GNSS) signal is absent. In this paper, a joint angle and delay estimation (JADE) scheme is designed for 5G picocell base stations (gNBs) that addresses two crucial issues to make it both effective and efficient in realistic indoor environments. Firstly, the direction dependence of the array modeling error for picocell gNBs, as well as its impact on JADE, is revealed. This error is mitigated by fitting the array response measurements to a vector-valued function and pre-calibrating the ideal steering vector with the fitted function. Secondly, based on the deployment reality that 5G picocell gNBs have only a small-scale antenna array but a large signal bandwidth, the proposed scheme decouples the estimation of time-of-arrival (TOA) and direction-of-arrival (DOA) to reduce the huge complexity induced by two-dimensional joint processing. It employs the iterative adaptive approach (IAA) to resolve multipath signals in the TOA domain, followed by a conventional beamformer (CBF) to retrieve the desired line-of-sight DOA. By further exploiting a dimension-reducing pre-processing module and accelerating spectrum computation with fast Fourier transforms, an efficient implementation is achieved for real-time JADE. Numerical simulations demonstrate the superiority of the proposed method in terms of DOA estimation accuracy. Field tests show that a triangulation positioning error of 0.44 m is achieved in 90% of cases using only DOAs estimated at two separate receiving points.
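A simplified sketch of the decoupled pipeline is shown below with assumed parameters (4 antennas, 128 subcarriers, a two-path channel): IAA over a delay grid separates the multipath components using the antennas as snapshots, and a conventional beamformer on the recovered LOS spatial signature yields the DOA. It omits the pre-calibration, dimension reduction, and FFT acceleration of the actual scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 128, 4                        # subcarriers, antennas (assumed)
df, d_lam = 480e3, 0.5
f, m = np.arange(N) * df, np.arange(M)

def a_delay(tau):   return np.exp(-2j * np.pi * f * tau)                    # frequency-domain delay atom
def a_space(theta): return np.exp(2j * np.pi * d_lam * m * np.sin(theta))   # ULA steering vector

paths = [(80e-9, np.deg2rad(15.0), 1.0), (140e-9, np.deg2rad(-40.0), 0.7)]  # (delay, DOA, gain); LOS first
H = sum(g * np.outer(a_delay(t), a_space(th)) for t, th, g in paths)
H = H + 0.02 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))

# --- IAA over the delay grid (the M antennas act as snapshots) ---
tau_grid = np.arange(0, 320e-9, 2e-9)
A = np.exp(-2j * np.pi * np.outer(f, tau_grid))                             # N x G
p = np.mean(np.abs(A.conj().T @ H) ** 2, axis=1) / N ** 2                   # matched-filter init
for _ in range(15):
    R = (A * p) @ A.conj().T + 1e-6 * np.eye(N)
    Ri_A = np.linalg.solve(R, A)
    S = (Ri_A.conj().T @ H) / np.sum(A.conj() * Ri_A, axis=0).real[:, None]
    p = np.mean(np.abs(S) ** 2, axis=1)

# --- LOS selection and CBF for its DOA ---
peak = np.argmax(p)                  # in this two-path toy case the global peak is the LOS delay
s_los = S[peak]                      # spatial signature of the LOS path across the M antennas
theta_grid = np.deg2rad(np.arange(-80, 80.5, 0.5))
cbf = np.abs(np.array([a_space(th).conj() @ s_los for th in theta_grid])) ** 2
print("estimated LOS DOA [deg]:", np.rad2deg(theta_grid[np.argmax(cbf)]))
```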
Abstract:In this paper, we address the problem of joint direction-of-arrival (DoA) and range estimation using a frequency diverse coprime array (FDCA). By incorporating the coprime array structure and coprime frequency offsets, a two-dimensional space-frequency virtual difference coarray corresponding to a uniform array with uniform frequency offsets is considered to increase the number of degrees-of-freedom (DoFs). However, reconstructing the doubly-Toeplitz covariance matrix is computationally prohibitive. To solve this problem, we propose an interpolation algorithm based on decoupled atomic norm minimization (DANM), which converts the coarray signal into a simple matrix form. On this basis, a relaxation-based optimization problem is formulated to achieve joint DoA-range estimation with enhanced DoFs, and the reconstructed coarray signal enables the application of existing subspace-based spectral estimation methods. The DANM problem is further reformulated as an equivalent rank-minimization problem solved by cyclic rank minimization. This approach avoids the approximation errors introduced by the nuclear-norm-based approach, thereby achieving a root-mean-square error closer to the Cramér-Rao bound. The effectiveness of the proposed method is confirmed by theoretical analyses and numerical simulations.
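The short sketch below illustrates only the degrees-of-freedom argument behind the FDCA (one common coprime configuration is assumed): it forms the spatial and frequency difference sets and counts the contiguous virtual lags of the resulting two-dimensional space-frequency coarray. The DANM interpolation and cyclic rank minimization are beyond this toy example.

```python
import numpy as np

M, N = 3, 5                                    # coprime integers (assumed)
# Assumed coprime construction: M elements at spacing N plus 2N elements at spacing M
pos = np.unique(np.concatenate([N * np.arange(M), M * np.arange(2 * N)]))
# The same coprime construction is reused for the frequency-offset dimension
freq = np.unique(np.concatenate([N * np.arange(M), M * np.arange(2 * N)]))

def diff_set(v):
    # all pairwise differences (virtual lags of the difference coarray)
    return np.unique((v[:, None] - v[None, :]).ravel())

def contiguous(lags):
    # length of the longest run of consecutive integer lags around zero
    s, k = set(lags.tolist()), 0
    while k + 1 in s:
        k += 1
    return 2 * k + 1

lags_s, lags_f = diff_set(pos), diff_set(freq)
print("physical sensors:", pos.size, "-> contiguous spatial lags:", contiguous(lags_s))
print("physical offsets:", freq.size, "-> contiguous frequency lags:", contiguous(lags_f))
print("contiguous 2-D space-frequency virtual coarray size:", contiguous(lags_s) * contiguous(lags_f))
```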
Abstract:Mobile edge computing (MEC) is a prominent computing paradigm that expands the application fields of wireless communication. Owing to the limited capacities of user equipment and MEC servers, edge caching (EC) optimization is crucial to the effective utilization of caching resources in MEC-enabled wireless networks. However, the dynamics and complexity of content popularity over space and time, as well as the privacy preservation of users, pose significant challenges to EC optimization. In this paper, a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm is proposed to maximize the cache hit rates of devices in MEC networks. Specifically, we consider the fact that content popularities are dynamic, complicated and unobservable, and formulate the maximization of cache hit rates on devices as a distributed problem under privacy-preservation constraints. In particular, we convert the distributed optimizations into distributed model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction. Subsequently, the P2D3PG algorithm is developed based on distributed reinforcement learning to solve the distributed problems. Simulation results demonstrate the superiority of the proposed approach over baseline methods in improving the EC hit rate while preserving user privacy.
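The toy simulation below illustrates the objective being maximized rather than the P2D3PG agent itself: device requests follow a hidden Markov popularity model (assumed), and the cache hit rate of a simple popularity-tracking policy is compared with a static random cache.

```python
import numpy as np

rng = np.random.default_rng(6)
F, C, T = 20, 4, 5000                 # catalogue size, cache size, time slots (assumed)

# Hidden Markov popularity: requests follow one of three slowly switching profiles
profiles = rng.dirichlet(np.ones(F) * 0.3, size=3)
trans = np.array([[0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01],
                  [0.01, 0.01, 0.98]])

def hit_rate(predictive):
    r = np.random.default_rng(0)                         # same request stream for both policies
    s, hits = 0, 0
    counts = np.ones(F)                                  # exponentially weighted request histogram
    cache = set(r.choice(F, C, replace=False))
    for _ in range(T):
        s = r.choice(3, p=trans[s])
        req = r.choice(F, p=profiles[s])
        hits += req in cache
        counts = 0.99 * counts
        counts[req] += 1.0
        if predictive:                                   # cache the currently most popular contents
            cache = set(np.argsort(counts)[-C:].tolist())
    return hits / T

print("static random cache hit rate:", round(hit_rate(False), 3))
print("popularity-tracking hit rate:", round(hit_rate(True), 3))
```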
Abstract:Virtual reality (VR) promises to fundamentally transform a broad spectrum of industry sectors and the way humans interact with virtual content. However, despite unprecedented progress, current networking and computing infrastructures cannot unlock VR's full potential. In this paper, we consider delivering a wireless multi-tile VR video service over a mobile edge computing (MEC) network. The primary goal is to minimize the system latency and energy consumption and to strike a tradeoff between them. To this end, we first cast the time-varying view popularity as a model-free Markov chain to effectively capture its dynamic characteristics. After jointly assessing the caching and computing capacities of both the MEC server and the VR playback device, a hybrid policy is implemented to coordinate dynamic caching replacement and deterministic offloading, so as to fully utilize the system resources. The underlying multi-objective problem is reformulated as a partially observable Markov decision process, and a deep deterministic policy gradient algorithm is proposed to iteratively learn its solution, where a long short-term memory neural network is embedded to continuously predict the dynamics of the unobservable popularity. Simulation results demonstrate the superiority of the proposed scheme in achieving a trade-off between energy efficiency and latency reduction over the baseline methods.
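As a toy illustration of the cost such a policy trades off (all numbers assumed), the sketch below draws tile views from a Markov-chain head-motion model and accumulates a weighted latency/energy cost depending on whether each tile is served from the device cache, the MEC cache, or MEC computation; the learning components are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n_tiles, slots = 16, 1000
w_latency, w_energy = 0.7, 0.3                      # tradeoff weights (assumed)

# View popularity as a Markov chain over tiles: the next viewed tile is most likely the
# current one or one of its neighbours (a simple head-motion model).
P = np.zeros((n_tiles, n_tiles))
for i in range(n_tiles):
    P[i, (i - 1) % n_tiles] = 0.25
    P[i, i] = 0.5
    P[i, (i + 1) % n_tiles] = 0.25

# Per-tile (latency [s], energy [J]) for each serving option (assumed values)
cost = {"device_cache": (0.002, 0.05), "mec_cache": (0.010, 0.20), "mec_compute": (0.035, 0.60)}
device_cache = set(range(4))                        # fixed caches, for illustration only
mec_cache = set(range(4, 12))

tile, total = 0, 0.0
for _ in range(slots):
    tile = int(rng.choice(n_tiles, p=P[tile]))
    if tile in device_cache:
        lat, en = cost["device_cache"]
    elif tile in mec_cache:
        lat, en = cost["mec_cache"]
    else:
        lat, en = cost["mec_compute"]
    total += w_latency * lat + w_energy * en
print("average per-slot weighted latency/energy cost:", round(total / slots, 4))
```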