Abstract: Fifth-generation (5G) New Radio (NR) cellular networks support a wide range of new services, many of which require an application-specific quality of service (QoS), e.g., a guaranteed minimum bit rate or a maximum tolerable delay. Scheduling multiple parallel data flows, each serving a unique application instance, is therefore bound to become an even more challenging task than in previous generations. Leveraging recent advances in deep reinforcement learning, in this paper we propose a QoS-Aware Deep Reinforcement learning Agent (QADRA) scheduler for NR networks. In contrast to state-of-the-art scheduling heuristics, the QADRA scheduler explicitly optimizes for the QoS satisfaction rate while simultaneously maximizing network performance, and we train the algorithm end-to-end on these objectives. We evaluate QADRA in a full-scale, near-product, system-level NR simulator and demonstrate a significant boost in network performance. In our evaluation scenario, the QADRA scheduler improves network throughput by 30% while simultaneously maintaining the QoS satisfaction rate of VoIP users served by the network, compared to state-of-the-art baselines.
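To make the joint objective concrete, the sketch below shows one way a scheduler's reward could couple network throughput with the QoS satisfaction rate, since the QADRA objective combines both. The function name, the linear weighting, and the weights alpha and beta are illustrative assumptions, not the reward or architecture used in the paper.

```python
import numpy as np

# Hypothetical illustration: a reward coupling total throughput with a
# QoS-satisfaction term, mirroring the two objectives named in the abstract.
# The linear form and the weights alpha/beta are assumptions for this sketch.
def scheduler_reward(flow_throughputs, qos_satisfied, alpha=1.0, beta=1.0):
    """Weighted sum of aggregate throughput and QoS satisfaction rate."""
    throughput_term = np.sum(flow_throughputs)  # e.g., Mbps delivered this TTI
    qos_term = np.mean(qos_satisfied)           # fraction of flows meeting QoS
    return alpha * throughput_term + beta * qos_term

# Toy usage: three parallel flows, two of which meet their QoS targets.
print(scheduler_reward(np.array([10.0, 2.0, 5.0]), np.array([1, 0, 1])))
```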
Abstract: Link adaptation (LA) optimizes the selection of modulation and coding schemes (MCS) for a stochastic wireless channel. Classical outer loop LA (OLLA) tracks the channel's signal-to-interference-and-noise ratio (SINR) based on the observed transmission outcomes. On the other hand, recent reinforcement learning LA (RLLA) schemes sample the available MCSs to optimize the link performance objective. However, both OLLA and RLLA rely on tuning parameters that are challenging to configure. Further, OLLA optimizes for a target block error rate (BLER) that only indirectly relates to the common throughput-maximization objective, while RLLA does not fully exploit the inter-dependence between the MCSs. In this paper, we propose latent Thompson Sampling for LA (LTSLA), an RLLA scheme that requires no configuration tuning and fully exploits MCS inter-dependence for efficient learning. LTSLA models an SINR probability distribution for MCS selection, and refines this distribution through Bayesian updates with the transmission outcomes. LTSLA also automatically adapts to different channel fading profiles by utilizing their respective Doppler estimates. We perform simulation studies of LTSLA along with OLLA and RLLA schemes for frequency-selective fading channels. Numerical results demonstrate that LTSLA improves the instantaneous link throughput by up to 50% compared to existing schemes.
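The following sketch illustrates the latent-SINR Thompson Sampling idea described above, under simplifying assumptions: a discretized posterior over the latent SINR, an assumed logistic decode model per MCS, and made-up decode thresholds and rates. It omits the Doppler-based adaptation and is not the paper's exact model.

```python
import numpy as np

# Minimal, hypothetical sketch of latent-SINR Thompson Sampling for MCS
# selection. Thresholds, rates, and the logistic link are assumptions.
rng = np.random.default_rng(0)

sinr_grid = np.linspace(-5.0, 30.0, 200)              # candidate SINR values (dB)
posterior = np.ones_like(sinr_grid) / len(sinr_grid)  # uniform prior over SINR

mcs_thresholds = np.array([0.0, 5.0, 10.0, 15.0, 20.0])  # assumed decode thresholds (dB)
mcs_rates = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # assumed spectral efficiencies

def success_prob(sinr_db, mcs):
    # Assumed logistic decode model around each MCS threshold (illustrative).
    return 1.0 / (1.0 + np.exp(-(sinr_db - mcs_thresholds[mcs])))

true_sinr = 12.0
for t in range(200):
    sinr_sample = rng.choice(sinr_grid, p=posterior)   # Thompson sample of SINR
    feasible = mcs_thresholds <= sinr_sample
    # Pick the highest-rate MCS deemed decodable at the sampled SINR.
    mcs = int(np.argmax(np.where(feasible, mcs_rates, -np.inf))) if feasible.any() else 0
    ack = rng.random() < success_prob(true_sinr, mcs)  # simulated transmission outcome
    # Bayesian update: reweight the SINR grid by the ACK/NACK likelihood.
    lik = success_prob(sinr_grid, mcs) if ack else 1.0 - success_prob(sinr_grid, mcs)
    posterior = posterior * lik
    posterior /= posterior.sum()

print("posterior-mean SINR estimate:", (sinr_grid * posterior).sum())
```

Because every ACK/NACK updates the single shared SINR belief, the outcome of one MCS informs the choice among all of them, which is the inter-dependence the abstract refers to.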
Abstract: We address multi-armed bandits (MAB) where the objective is to maximize the cumulative reward under a probabilistic linear constraint. For a few real-world instances of this problem, constrained extensions of the well-known Thompson Sampling (TS) heuristic have recently been proposed. However, finite-time analysis of constrained TS is challenging; as a result, only $O(\sqrt{T})$ bounds on the cumulative reward loss (i.e., the regret) are available. In this paper, we describe LinConTS, a TS-based algorithm for bandit problems that place a linear constraint on the probability of earning a reward in every round. We show that for LinConTS, the regret as well as the cumulative constraint violations are upper bounded by $O(\log T)$ for the suboptimal arms. We develop a proof technique that relies on careful analysis of the dual problem and combine it with recent theoretical work on unconstrained TS. Through numerical experiments on two real-world datasets, we demonstrate that LinConTS outperforms an asymptotically optimal upper confidence bound (UCB) scheme in terms of simultaneously minimizing the regret and the violation.
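The sketch below shows one plausible reading of a LinConTS-style round: Beta posteriors supply Thompson samples of each arm's reward probability, and a small linear program picks a randomized arm-selection distribution that respects the constraint. The reward values r, the threshold tau, and the LP formulation are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sketch: arm i pays reward r[i] with unknown probability mu[i];
# the constraint demands that the per-round probability of earning a reward
# stays above tau. All numerical values below are assumptions.
rng = np.random.default_rng(1)

r = np.array([1.0, 0.6, 0.3])          # assumed per-arm reward values
true_mu = np.array([0.3, 0.6, 0.9])    # unknown reward probabilities
tau = 0.7                              # constraint threshold (assumption)
alpha, beta = np.ones(3), np.ones(3)   # Beta(1,1) priors per arm

for t in range(500):
    theta = rng.beta(alpha, beta)      # Thompson sample per arm
    # LP: maximize x . (r * theta)  s.t.  x . theta >= tau, x a distribution.
    res = linprog(c=-(r * theta),
                  A_ub=[-theta], b_ub=[-tau],
                  A_eq=[np.ones(3)], b_eq=[1.0],
                  bounds=[(0, 1)] * 3, method="highs")
    x = res.x if res.success else np.ones(3) / 3   # fall back if LP infeasible
    x = np.clip(x, 0, None)
    arm = rng.choice(3, p=x / x.sum())
    reward_earned = rng.random() < true_mu[arm]    # Bernoulli outcome
    alpha[arm] += reward_earned                    # posterior update
    beta[arm] += 1 - reward_earned

print("posterior means:", alpha / (alpha + beta))
```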
Abstract: Wireless communication systems operate in complex time-varying environments, which makes selecting the optimal configuration parameters in these systems a challenging problem. For wireless links, rate selection is used to select the data transmission rate that maximizes the link throughput subject to an application-defined latency constraint. We model rate selection as a stochastic multi-armed bandit (MAB) problem, where a finite set of transmission rates are modeled as independent bandit arms. For this setup, we propose Con-TS, a novel constrained version of the Thompson sampling algorithm, where the latency requirement is modeled by a linear constraint on arm selection probabilities. Since our algorithm learns a Bayesian model of the wireless link, it can be adapted to exploit the prior knowledge that is often available in practical wireless networks. Through simulation experiments, we demonstrate that Con-TS significantly outperforms state-of-the-art bandit algorithms proposed in the literature. Further, we compare Con-TS with the outer loop link adaptation (OLLA) scheme, which is the state of the art in practical wireless networks and relies on carefully tuned offline link models. We show that Con-TS outperforms OLLA in simulations; moreover, it can elegantly incorporate information from the offline link models to substantially improve performance.
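As an illustration of the last point, the sketch below shows one way offline link-model knowledge could seed the Bayesian model: an offline estimate of each rate's success probability becomes an informative Beta prior with a pseudo-count strength, which online ACK/NACK feedback then refines as in standard Thompson sampling. The estimates p_hat and the strength n0 are assumptions, not the paper's recipe.

```python
import numpy as np

# Hypothetical sketch: seeding Thompson-sampling priors from an offline
# link model. p_hat and n0 below are illustrative assumptions.
p_hat = np.array([0.95, 0.8, 0.5, 0.2])  # assumed offline success estimates per rate
n0 = 20.0                                # prior strength in pseudo-counts (assumption)
alpha = n0 * p_hat                       # prior successes
beta = n0 * (1.0 - p_hat)                # prior failures

rng = np.random.default_rng(2)
theta = rng.beta(alpha, beta)            # one Thompson sample per transmission rate
print("sampled success probabilities:", theta)
# Online ACK/NACK feedback would then update alpha/beta per selected rate,
# so the offline model steers early exploration without fixing the estimates.
```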