Abstract: Contemporary radio access networks employ link adaptation (LA) algorithms that select modulation and coding schemes matched to the prevailing propagation conditions and are near-optimal in terms of the achieved spectral efficiency. LA is a challenging task in the presence of mobility, fast fading, imperfect channel quality information, and limited knowledge of the receiver characteristics at the transmitter, all of which render model-based LA algorithms complex and suboptimal. Model-based LA is especially difficult as connected user equipment devices become increasingly heterogeneous in terms of receiver capabilities, antenna configurations, and hardware characteristics. Recognizing these difficulties, previous works have proposed reinforcement learning (RL) for LA, which faces deployment difficulties due to its potential negative impact on live network performance. To address this challenge, this paper considers offline RL to learn LA policies from data acquired in live networks with minimal or no intrusive effects on network operation. We propose three LA designs based on batch-constrained deep Q-learning, conservative Q-learning, and decision transformers, and show that offline RL algorithms can match the performance of state-of-the-art online RL methods when the data is collected with a proper behavioral policy.
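To make the offline RL formulation concrete, the following is a minimal sketch of a conservative Q-learning (CQL) update for link adaptation, assuming the state is a vector of channel quality features, the action is an MCS index, and the reward reflects the acknowledged spectral efficiency; all dimensions, hyperparameters, and the reward definition are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical CQL sketch for offline link adaptation (PyTorch).
# State: channel quality features; action: MCS index; reward: e.g., ACKed spectral efficiency.
# N_MCS, GAMMA, ALPHA, and the network sizes are assumed values.
import torch
import torch.nn as nn

N_MCS = 28          # number of modulation-and-coding-scheme actions (assumed)
GAMMA = 0.9         # discount factor (assumed)
ALPHA = 1.0         # conservatism weight (assumed)

q_net = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, N_MCS))
target_net = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, N_MCS))
target_net.load_state_dict(q_net.state_dict())
optim = torch.optim.Adam(q_net.parameters(), lr=3e-4)

def cql_update(s, a, r, s_next, done):
    """One gradient step on a logged batch (s, a, r, s_next, done)."""
    q_all = q_net(s)                                       # Q(s, .) over all MCS actions
    q_taken = q_all.gather(1, a.unsqueeze(1)).squeeze(1)   # Q of the logged action
    with torch.no_grad():
        target = r + GAMMA * (1 - done) * target_net(s_next).max(dim=1).values
    bellman = nn.functional.mse_loss(q_taken, target)
    # CQL regularizer: push Q down on unseen actions, keep it up on logged actions.
    conservative = (torch.logsumexp(q_all, dim=1) - q_taken).mean()
    loss = bellman + ALPHA * conservative
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()
```

The conservative term is what keeps the learned policy close to the logged behavior, which is the property that lets such a policy be trained from live-network data without online exploration.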
Abstract: Artificial intelligence (AI) has emerged as a powerful tool for addressing complex and dynamic tasks in communication systems, where traditional rule-based algorithms often struggle. However, most AI applications to networking tasks are designed and trained for specific, limited conditions, which prevents the algorithms from learning and adapting to generic situations, such as those encountered across radio access networks (RANs). This paper proposes design principles for sustainable and scalable AI integration in communication systems, focusing on creating AI algorithms that can generalize across network environments, intents, and control tasks. This approach enables a limited number of AI-driven RAN functions to tackle larger problems, improve system performance, and simplify lifecycle management. To achieve sustainability and automation, we introduce a scalable learning architecture that supports all deployed AI applications in the system. This architecture separates centralized learning functionalities from distributed actuation and inference functions, enabling efficient data collection and management, optimization of computational and storage resources, and cost reduction. We illustrate these concepts by designing a generalized link adaptation algorithm, demonstrating the benefits of our proposed approach.
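As a rough illustration of the proposed learning/actuation split, the sketch below separates a centralized learner, which pools experience and publishes model versions, from distributed RAN functions that only run inference and report data; every class, method, and threshold here is an illustrative assumption and not an interface defined in the paper.

```python
# Hypothetical sketch of separating centralized learning from distributed inference.
from dataclasses import dataclass
from typing import List

@dataclass
class Experience:
    state: list
    action: int
    reward: float

class CentralLearner:
    """Centralized training, data management, and model lifecycle (assumed interface)."""
    def __init__(self):
        self.replay: List[Experience] = []
        self.weights_version = 0

    def ingest(self, batch: List[Experience]) -> None:
        self.replay.extend(batch)            # data pooled from many cells/sites

    def train_and_publish(self) -> int:
        # ... training over self.replay omitted ...
        self.weights_version += 1
        return self.weights_version          # new version pushed to inference agents

class InferenceAgent:
    """Distributed RAN function: local inference and actuation only, no training."""
    def __init__(self, learner: CentralLearner):
        self.learner = learner
        self.buffer: List[Experience] = []

    def act(self, state: list) -> int:
        return 0                             # placeholder policy lookup

    def step(self, state: list, reward: float) -> None:
        self.buffer.append(Experience(state, self.act(state), reward))
        if len(self.buffer) >= 100:          # periodic, batched reporting (assumed)
            self.learner.ingest(self.buffer)
            self.buffer.clear()
```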
Abstract: Fifth generation cellular systems support a broad range of services, including mobile broadband and critical and massive Internet of Things, and are used in a variety of scenarios. In many of these scenarios, the main challenge is maintaining high throughput and ensuring the required quality of service (QoS) in irregular topologies. In multiple-input multiple-output (MIMO) systems, this challenge translates to designing linear transmit and receive beamformers that maximize the system throughput while meeting QoS constraints. In this paper, we argue that this basic design task in 5G and beyond systems must be extended so that beamforming design and user scheduling are managed jointly. Specifically, we propose a fully decentralized joint beamforming design and user scheduling algorithm that manages QoS. A novel feature of this scheme is its ability to reduce the initial rate requirements in case of infeasibility. By means of simulations that model contemporary 5G scenarios, we show that the proposed decentralized scheme outperforms benchmark algorithms that do not support minimum rate requirements as well as previously proposed algorithms that do support QoS requirements.
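To illustrate the infeasibility handling mentioned in this abstract, the sketch below computes per-user rates under a standard SINR model for given channels and transmit beamformers, then scales down the minimum-rate targets until they become satisfiable; the SINR model and the uniform back-off rule are assumptions for illustration, not the algorithm proposed in the paper.

```python
# Hypothetical sketch: per-user rates and relaxation of minimum-rate (QoS) targets.
import numpy as np

def achievable_rates(H, W, noise=1.0):
    """Rate log2(1 + SINR) per user, for channel vectors H[k] and beamformers W[k]."""
    K = len(H)
    rates = np.zeros(K)
    for k in range(K):
        gains = np.array([abs(H[k].conj() @ W[j]) ** 2 for j in range(K)])
        sinr = gains[k] / (noise + gains.sum() - gains[k])   # desired vs. interference + noise
        rates[k] = np.log2(1 + sinr)
    return rates

def relax_requirements(rates, r_min, step=0.9, max_iter=50):
    """Scale down rate requirements until every user meets them (assumed back-off rule)."""
    r_req = np.array(r_min, dtype=float)
    for _ in range(max_iter):
        if np.all(rates >= r_req):
            return r_req                     # feasible requirement vector
        r_req *= step                        # uniform reduction of the targets
    return r_req

# Example: 3 single-antenna users, 4 transmit antennas, random channels and beams.
rng = np.random.default_rng(0)
H = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3)]
W = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3)]
print(relax_requirements(achievable_rates(H, W), r_min=[2.0, 2.0, 2.0]))
```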