Abstract: In modern cell-less wireless networks, mobility management is undergoing a significant transformation, shifting from single-link handover management to a more adaptable multi-connectivity cluster reconfiguration approach that must balance often conflicting objectives, such as energy-efficient power allocation and varying reliability requirements. In this work, we address the challenge of dynamic clustering and power allocation for unmanned aerial vehicle (UAV) communication in wireless interference networks. Our objective encompasses meeting varying reliability demands, minimizing power consumption, and reducing the frequency of cluster reconfiguration. To achieve these objectives, we introduce a novel reinforcement learning approach based on a masked soft actor-critic algorithm, tailored specifically for dynamic clustering and power allocation.
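The "masked" part of the masked soft actor-critic can be illustrated by how infeasible actions are excluded from the policy. A minimal sketch, assuming a discrete action space of candidate BS clusters and a binary feasibility mask (the function and example values below are hypothetical, not taken from the paper):

```python
import numpy as np

def masked_softmax(logits, mask):
    """Turn policy logits into action probabilities while assigning
    zero probability to infeasible cluster configurations (mask == 0)."""
    masked = np.where(mask.astype(bool), logits, -np.inf)
    shifted = masked - np.max(masked)  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical example: 4 candidate BS clusters, clusters 1 and 3 infeasible
logits = np.array([1.0, 2.0, 0.5, 3.0])
mask = np.array([1, 0, 1, 0])
probs = masked_softmax(logits, mask)
```

Masking at the policy output keeps the agent from ever sampling a forbidden cluster configuration, which is a common way to enforce hard constraints in discrete actor-critic methods.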
Abstract: Modern communication systems need to fulfill multiple, often conflicting objectives at the same time. In particular, new applications require high reliability while operating at low transmit power. Moreover, reliability constraints may vary over time depending on the current state of the system. One solution to this problem is to use joint transmissions from several base stations (BSs) to meet the reliability requirements. However, this approach is inefficient in terms of total transmit power. In this work, we propose a reinforcement learning-based power allocation scheme for an unmanned aerial vehicle (UAV) communication system with varying communication reliability requirements. In particular, the proposed scheme aims to minimize the total transmit power of all BSs while keeping the outage probability below a tolerated threshold. This threshold varies over time, e.g., when the UAV enters a critical zone with high reliability requirements. Our results show that the proposed learning scheme uses dynamic power allocation to meet varying reliability requirements, thus effectively conserving energy.
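The trade-off described above can be encoded in the agent's reward: total transmit power is penalized, and a large penalty is added whenever the outage probability exceeds the currently tolerated threshold. A minimal sketch under assumed names and weights (the reward shape, `power_weight`, and `penalty` values are illustrative, not the paper's actual formulation):

```python
import numpy as np

def reward(tx_powers, outage_prob, outage_threshold,
           power_weight=1.0, penalty=10.0):
    """Hypothetical reward: penalize total BS transmit power, plus a
    large penalty when the outage probability exceeds the current
    (possibly time-varying) tolerated threshold."""
    r = -power_weight * float(np.sum(tx_powers))
    if outage_prob > outage_threshold:
        r -= penalty
    return r

# Same allocation, evaluated against a relaxed zone (threshold 1e-2)
# and a critical zone (threshold 1e-5); per-BS powers are illustrative.
powers = [0.2, 0.1, 0.05]
r_ok = reward(powers, outage_prob=1e-3, outage_threshold=1e-2)
r_bad = reward(powers, outage_prob=1e-3, outage_threshold=1e-5)
```

Because the threshold enters the reward at every step, the same learned policy is pushed toward higher joint-transmission power only inside critical zones, and toward energy saving elsewhere.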