Abstract: Wireless baseband processing (WBP) is a key element of wireless communications, with a series of signal processing modules to improve data throughput and counter channel fading. Conventional hardware solutions, such as digital signal processors (DSPs) and, more recently, graphics processing units (GPUs), provide various degrees of parallelism, yet they both fail to take into account the cyclical and consecutive character of WBP. Furthermore, the large amount of data in WBP cannot be processed quickly in symmetric multiprocessors (SMPs) due to the unpredictability of memory latency. To address these issues, we propose a hierarchical dataflow-driven architecture to accelerate WBP. A pack-and-ship approach is presented under a non-uniform memory access (NUMA) architecture to allow the subordinate tiles to operate in a bundled access-and-execute manner. We also propose a multi-level dataflow model and the related scheduling scheme to manage and allocate the heterogeneous hardware resources. Experimental results demonstrate that our prototype achieves $2\times$ and $2.3\times$ speedups in terms of normalized throughput and single-tile clock cycles compared with GPU and DSP counterparts on several critical WBP benchmarks. Additionally, a link-level throughput of $288$ Mbps can be achieved with a $45$-core configuration.
Abstract: Network slicing is a critical enabler for guaranteeing the diverse service level agreements (SLAs) in 5G and future networks. Recently, deep reinforcement learning (DRL) has been widely utilized for resource allocation in network slicing. However, existing works do not consider the performance loss associated with the initial exploration phase of DRL. This paper proposes a new performance-guaranteed slicing strategy with a hybrid soft-and-hard slicing setting. Specifically, a common slice is introduced to guarantee the slices' SLAs while the neural network is being trained. Moreover, the resources of the common slice are gradually and precisely redistributed to the dedicated slices as the DRL agent trains, until convergence. Experimental results confirm the effectiveness of the proposed slicing framework: the slices' SLAs are guaranteed throughout the training phase, and the proposed algorithm achieves near-optimal performance in terms of SLA satisfaction ratio, isolation degree, and spectrum maximization after convergence.
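As an illustration of the hybrid soft-and-hard slicing idea, the sketch below shows how a shared common slice could guarantee SLAs during DRL training while its resources are gradually handed back to the dedicated slices. The resource-block budget, the decay schedule, and the helper names (`allocate`, `common_ratio`) are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): a "common slice" serves as a
# shared safety pool during DRL training, and its share shrinks as training
# proceeds so that resources end up precisely redistributed to dedicated slices.
import numpy as np

TOTAL_RBS = 100          # total resource blocks in the cell (assumed)
N_SLICES = 3             # number of dedicated slices (assumed)

def allocate(drl_action, episode, decay=0.01, min_common=0.0):
    """Split resources between a shared common slice and dedicated slices.

    drl_action: non-negative preference weights produced by the DRL agent.
    episode:    training episode index; the common slice shrinks with it.
    """
    # Common slice shrinks as training proceeds, approximating convergence.
    common_ratio = max(min_common, 1.0 - decay * episode)
    common_rbs = int(TOTAL_RBS * common_ratio)

    # Remaining resources are split according to the (normalized) DRL action.
    weights = np.asarray(drl_action, dtype=float)
    weights = weights / weights.sum()
    dedicated_rbs = np.floor(weights * (TOTAL_RBS - common_rbs)).astype(int)
    return common_rbs, dedicated_rbs

# Early in training most resources remain in the common slice, so every slice's
# SLA can still be served from the shared pool; later the DRL policy dominates.
print(allocate([0.5, 0.3, 0.2], episode=10))
print(allocate([0.5, 0.3, 0.2], episode=90))
```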
Abstract: Providing Ultra-Reliable and Low-Latency Communications (URLLC) services in vehicular networks on millimeter-wave bands presents a significant challenge, considering the necessity of constantly adjusting the beam directions. Conventional methods are mostly based on classical control theory, e.g., the Kalman filter and its variations, which mainly deal with stationary scenarios. Severe application limitations therefore exist, especially in complicated, dynamic Vehicle-to-Everything (V2X) channels. This paper gives a thorough study of this subject by first modifying the classical approaches, e.g., the Extended Kalman Filter (EKF) and the Particle Filter (PF), for non-stationary scenarios, and then proposing a Reinforcement Learning (RL)-based approach that can meet the URLLC requirements in a typical intersection scenario. Simulation results based on a commercial ray-tracing simulator show that the enhanced EKF and PF methods incur packet delays of more than $10$ ms, whereas the proposed deep RL-based method can reduce the latency to about $6$ ms by extracting context information from the training data.
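To make the filter-based beam-tracking baseline concrete, the sketch below uses a plain constant-angular-velocity Kalman filter rather than the paper's enhanced EKF/PF or RL methods; the slot duration, motion model, and noise levels are assumed values chosen only for illustration.

```python
# Minimal sketch: track a vehicle's angle of departure with a linear Kalman
# filter so the beam can be re-steered between measurements. Not the paper's
# exact tracker; the motion model and noise covariances are assumptions.
import numpy as np

dt = 0.01                                   # slot duration in seconds (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition: [angle, angular rate]
H = np.array([[1.0, 0.0]])                  # only the angle is measured
Q = np.diag([1e-6, 1e-4])                   # process noise (assumed)
R = np.array([[1e-3]])                      # measurement noise (assumed)

x = np.array([[0.0], [0.0]])                # initial angle / angular-rate estimate
P = np.eye(2)

def kf_step(x, P, z):
    # Predict the next angle from the constant-angular-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured angle z (e.g., from beam-sweeping feedback).
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.02, 0.05, 0.09, 0.14]:          # synthetic angle measurements (rad)
    x, P = kf_step(x, P, z)
    print("predicted beam angle:", float(x[0]))
```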
Abstract: Future wireless access networks need to support the diversified quality of service (QoS) metrics required by various types of Internet-of-Things (IoT) devices, e.g., age of information (AoI) for status-generating sources and ultra-low latency for safety information in vehicular networks. In this paper, a novel inner-state driven random access (ISDA) framework is proposed based on distributed policy learning, in particular the cross-entropy method. Conventional random access schemes, e.g., $p$-CSMA, assume stateless terminals and thus assign equal priorities to all of them. In ISDA, the inner-state of each terminal is described by a time-varying state vector, and the transmission probabilities of terminals in the contention period are determined by their respective inner-states. Neural networks are leveraged to approximate the function mappings from inner-states to transmission probabilities, and an iterative approach is adopted to improve these mappings in a distributed manner. Experimental results show that ISDA improves the QoS of heterogeneous terminals simultaneously compared to conventional CSMA schemes.
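The sketch below illustrates the cross-entropy style policy improvement underlying ISDA, with a simple logistic mapping standing in for the neural network and a toy AoI-based reward; the feature choices, reward terms, and hyper-parameters are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of the cross-entropy method (CEM): sample candidate policy
# parameters, score them on a toy episode, and refit the sampling distribution
# to the elite samples. The mapping from inner-state to transmission
# probability is a logistic function here instead of a neural network.
import numpy as np

rng = np.random.default_rng(0)

def tx_prob(theta, state):
    """Logistic mapping from inner-state features (AoI, backlog) to a transmit probability."""
    return 1.0 / (1.0 + np.exp(-(theta[0] * state[0] + theta[1] * state[1] + theta[2])))

def episode_reward(theta, n_slots=200):
    """Toy reward: transmitting when the AoI is high resets the age and earns reward."""
    aoi, backlog, reward = 0.0, 0.0, 0.0
    for _ in range(n_slots):
        p = tx_prob(theta, (aoi, backlog))
        if rng.random() < p:
            reward += aoi          # successful update resets the age
            aoi = 0.0
        else:
            aoi += 1.0             # age grows while waiting
        backlog = max(0.0, backlog + rng.normal(0.0, 0.1))
    return reward

mu, sigma = np.zeros(3), np.ones(3)
for it in range(30):
    samples = rng.normal(mu, sigma, size=(64, 3))      # sample candidate policies
    scores = np.array([episode_reward(s) for s in samples])
    elite = samples[np.argsort(scores)[-8:]]           # keep the top-8 "elite" samples
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("learned policy parameters:", mu)
```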
Abstract: Timely and accurate knowledge of channel state information (CSI) is necessary to support scheduling operations at both the physical and network layers. In order to support pilot-free channel estimation in cell-sleeping scenarios, we propose to adopt a channel database that stores the CSI as a function of geographic location. Such a channel database is generated from historical user records, which usually cannot cover all locations in the cell. Therefore, we develop a two-step interpolation method to infer the channels at the uncovered locations: it first applies the K-nearest-neighbor method to form a coarse database and then refines it with a deep convolutional neural network. When applied to channel data generated by ray-tracing software, our method shows a clear performance advantage over conventional interpolation methods.
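A minimal sketch of the two-step interpolation is given below, assuming CSI magnitudes on a regular location grid: a K-nearest-neighbor regressor forms the coarse database, and a small convolutional network refines it. The data shapes, layer sizes, and training setup are assumptions, not the configuration used in the paper.

```python
# Minimal sketch of the KNN-then-CNN refinement pipeline on synthetic data.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsRegressor

# Step 1: coarse database via KNN over recorded (x, y) -> CSI samples.
recorded_xy = np.random.rand(500, 2)                 # historical user locations
recorded_csi = np.random.rand(500)                   # e.g., path gain at each location
knn = KNeighborsRegressor(n_neighbors=5).fit(recorded_xy, recorded_csi)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 64),
                            np.linspace(0, 1, 64)), axis=-1).reshape(-1, 2)
coarse_map = knn.predict(grid).reshape(1, 1, 64, 64).astype(np.float32)

# Step 2: refine the coarse map with a small CNN (in the actual system it would
# be trained against measured CSI at covered locations; here it is only built).
refiner = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
refined_map = refiner(torch.from_numpy(coarse_map))
print(refined_map.shape)     # torch.Size([1, 1, 64, 64])
```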
Abstract: Channel state information (CSI) is of vital importance in wireless communication systems. Existing CSI acquisition methods usually rely on pilot transmissions, and geographically separated base stations (BSs) with uncorrelated CSI must be assigned orthogonal pilots, which occupy excessive system resources. Our previous work adopted a data-driven deep-learning-based approach that leverages the CSI at a local BS to infer the CSI at a remote BS; however, the relevance of the CSI between separated BSs was not characterized explicitly. In this paper, we exploit a model-based methodology to derive the Cram\'er-Rao lower bound (CRLB) of remote CSI inference given the local CSI. Although the model is simplified, the derived CRLB explicitly illustrates the relationship between the inference performance and several key system parameters, e.g., terminal distance and antenna array size. In particular, it shows that by leveraging multiple local BSs, the inference error exhibits a larger power-law decay rate (with respect to the number of antennas) compared with a single local BS; this explains and validates our findings in evaluating the deep-neural-network-based (DNN-based) CSI inference. We further improve the DNN-based method by employing dropout and deeper networks, and show an inference accuracy of approximately $90\%$ in a realistic scenario with CSI generated by a ray-tracing simulator.
Abstract: In this paper, we propose a learning-based low-overhead beam alignment method for vehicle-to-infrastructure communication in vehicular networks. The main idea is to remotely infer the optimal beam directions at a target base station in future time slots, based on the channel state information (CSI) of a source base station in previous time slots. The proposed scheme reduces the channel acquisition and beam training overhead by replacing pilot-aided beam training with online inference from a sequence-to-sequence neural network. Simulation results based on ray-tracing channel data show that the proposed scheme achieves an $8.86\%$ improvement over location-based beamforming schemes with a positioning error of $1$ m, and suffers only a $4.93\%$ performance loss compared with the genie-aided optimal beamformer.
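A minimal sketch of the sequence-to-sequence inference step is given below, assuming per-slot CSI feature vectors at the source BS and a discrete beam codebook at the target BS; the network sizes, shapes, and class names are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch: a GRU encoder summarizes the source-BS CSI history, and a GRU
# decoder emits beam-index logits for the target BS over a short future horizon.
import torch
import torch.nn as nn

class Seq2SeqBeamPredictor(nn.Module):
    def __init__(self, csi_dim=64, hidden=128, n_beams=32, horizon=5):
        super().__init__()
        self.encoder = nn.GRU(csi_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_beams)       # one beam index per future slot
        self.horizon = horizon

    def forward(self, csi_seq):                      # csi_seq: (batch, T_past, csi_dim)
        _, h = self.encoder(csi_seq)                 # summarize the source-BS history
        dec_in = h.transpose(0, 1).repeat(1, self.horizon, 1)
        out, _ = self.decoder(dec_in, h)
        return self.head(out)                        # (batch, horizon, n_beams) logits

model = Seq2SeqBeamPredictor()
logits = model(torch.randn(8, 20, 64))               # 8 samples, 20 past CSI slots
predicted_beams = logits.argmax(dim=-1)              # best beam per future slot
print(predicted_beams.shape)                         # torch.Size([8, 5])
```

In deployment, the predicted beam indices would replace the pilot-aided beam sweep at the target BS, which is where the overhead saving described above comes from.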
Abstract: Knowledge of the propagation channel in which a wireless system operates enables better, more efficient approaches to signal transmission. Therefore, channel state information (CSI) plays a pivotal role in system performance. The importance of CSI is in fact growing in 5G and beyond systems, e.g., for the implementation of massive multiple-input multiple-output (MIMO). However, the acquisition of timely and accurate CSI has long been considered a major issue, and it becomes increasingly challenging due to the need to obtain the CSI of many antenna elements in massive MIMO systems. To cope with this challenge, existing works mainly focus on exploiting linear structures of CSI, such as CSI correlations in the spatial domain, to achieve dimensionality reduction. In this article, we first systematically review the state of the art on CSI structure exploitation, and then go further to seek deeper structures that enable remote CSI inference, where a data-driven deep neural network (DNN) approach is necessary due to model inadequacy. We develop specific DNN designs suitable for CSI data. Case studies are provided to demonstrate the great potential of this direction for future performance enhancement.