Abstract: The advent of Ultra-Reliable Low Latency Communication (URLLC) alongside the emergence of Open RAN (ORAN) architectures presents unprecedented challenges and opportunities in Radio Resource Management (RRM) for next-generation communication systems. This paper presents a comprehensive trade-off analysis of Deep Reinforcement Learning (DRL) approaches designed to enhance URLLC performance within ORAN's flexible and dynamic framework. By investigating various DRL strategies for optimising RRM parameters, we explore the intricate balance between reliability, latency, and the adaptability afforded by ORAN principles. Through extensive simulations, our study compares the efficacy of different DRL models in achieving URLLC objectives in an ORAN context, highlighting the potential of DRL to navigate the complexities introduced by ORAN. The study provides valuable insights into the practical implementation of DRL-based RRM solutions in ORAN-enabled wireless networks and sheds light on the benefits and challenges of integrating DRL and ORAN for URLLC enhancement. Our findings contribute to the ongoing discourse on URLLC and ORAN advancements, offering a roadmap for future research towards efficient, reliable, and flexible communication systems.
Abstract: This work addresses resource allocation challenges in multi-cell wireless systems catering to enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low Latency Communications (URLLC) users. We present a distributed learning framework tailored to O-RAN network architectures. Leveraging a Thompson sampling-based Deep Reinforcement Learning (DRL) algorithm, our approach provides real-time resource allocation decisions, aligning with evolving network structures. The proposed approach facilitates online decision-making for resource allocation by deploying trained execution agents at Near-Real Time Radio Access Network Intelligent Controllers (Near-RT RICs) located at network edges. Simulation results demonstrate the algorithm's effectiveness in meeting Quality of Service (QoS) requirements for both eMBB and URLLC users, offering insights into optimising resource utilisation in dynamic wireless environments.
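As a concrete illustration of the Thompson-sampling decision step summarised above, the following Python sketch reduces the problem to a Gaussian bandit choosing how many physical resource blocks (PRBs) to reserve for the URLLC slice in each decision interval. The reward shape, action grid, and all numerical values are illustrative assumptions, not the paper's actual DRL agent; in the full framework the same sample-then-act pattern would be executed by the trained agent at the Near-RT RIC.

```python
# Minimal sketch: Thompson-sampling action selection for per-interval PRB allocation
# between an eMBB slice and a URLLC slice. All names, reward shapes, and numbers
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
TOTAL_PRBS = 50
actions = np.arange(5, TOTAL_PRBS, 5)       # candidate PRB counts reserved for URLLC

# Gaussian posterior over each action's mean reward (unknown mean, known observation precision).
post_mean = np.zeros(len(actions))
post_prec = np.ones(len(actions))           # posterior precision (1 / variance)
OBS_PREC = 4.0                              # assumed observation precision

def qos_reward(urllc_prbs: int) -> float:
    """Toy reward: URLLC latency-target satisfaction plus a proxy for eMBB throughput."""
    urllc_ok = rng.random() < min(1.0, urllc_prbs / 20)           # latency target met?
    embb_rate = (TOTAL_PRBS - urllc_prbs) * rng.uniform(0.5, 1.0)  # arbitrary rate proxy
    return 2.0 * urllc_ok + 0.05 * embb_rate

for tti in range(2000):
    # Thompson sampling: draw one plausible mean reward per action, act greedily on the draw.
    sampled = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
    a = int(np.argmax(sampled))
    r = qos_reward(int(actions[a]))
    # Conjugate Gaussian update of the chosen action's posterior.
    post_mean[a] = (post_prec[a] * post_mean[a] + OBS_PREC * r) / (post_prec[a] + OBS_PREC)
    post_prec[a] += OBS_PREC

print("PRBs reserved for URLLC by the learned policy:", actions[np.argmax(post_mean)])
```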
Abstract: In the rapidly evolving landscape of 5G and beyond, cloud-native Open Radio Access Networks (O-RAN) represent a paradigm shift towards intelligent, flexible, and sustainable network operations. This study addresses the intricate challenge of energy-efficient (EE) resource allocation that serves both enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low Latency Communications (URLLC) users. We propose a novel distributed learning framework leveraging on-policy and off-policy transfer learning strategies within a deep reinforcement learning (DRL)-based model to facilitate online resource allocation decisions under varying channel conditions. Simulation results demonstrate the efficacy of the proposed method, which rapidly adapts to dynamic network states, thereby achieving green resource allocation.
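To indicate what the transfer step might look like in practice, the Python (PyTorch) sketch below initialises a target-domain policy network from weights trained under source channel conditions and then fine-tunes only its output head online. The network shape, layer names, and the choice to freeze the feature extractor are assumptions made for illustration; they do not reproduce the paper's exact on-policy/off-policy transfer strategies.

```python
# Minimal sketch of the transfer step: a policy network trained under "source"
# channel conditions seeds the agent deployed under "target" conditions, which
# is then fine-tuned online. Sizes and file names are illustrative assumptions.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, n_obs: int = 16, n_actions: int = 8):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                      nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(64, n_actions)    # logits over resource-allocation actions

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(obs))

# 1) Agent trained offline on source-domain channel traces (training loop omitted).
source_agent = PolicyNet()
torch.save(source_agent.state_dict(), "source_policy.pt")

# 2) Transfer: initialise the target-domain agent from the source weights ...
target_agent = PolicyNet()
target_agent.load_state_dict(torch.load("source_policy.pt"))

# 3) ... freeze the shared feature extractor and fine-tune only the head online,
#    which keeps per-decision adaptation cheap at the network edge.
for p in target_agent.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, target_agent.parameters()), lr=1e-3)

obs = torch.randn(32, 16)                       # a batch of observed channel/QoS states
logits = target_agent(obs)
print(logits.shape)                             # torch.Size([32, 8])
```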