Abstract: In this paper, we design a new flexible smart software-defined radio access network (Soft-RAN) architecture with traffic awareness for sixth generation (6G) wireless networks. In particular, we consider a hierarchical resource allocation model for the proposed smart soft-RAN model, where the software-defined network (SDN) controller is the first and foremost layer of the framework. This unit dynamically monitors the network to select a network operation type, based on either distributed or centralized resource allocation procedures, to perform decision-making intelligently. In this paper, our aim is to make the network more scalable and more flexible in terms of conflicting performance indicators such as achievable data rate, overhead, and complexity. To this end, we introduce a new metric, i.e., throughput-overhead-complexity (TOC), for the proposed machine learning-based algorithm, which supports a trade-off between these performance indicators. In particular, the TOC-based decision-making problem is solved via deep reinforcement learning (DRL), which determines an appropriate resource allocation policy. Furthermore, for the selected algorithm, we employ the soft actor-critic (SAC) method, which is more accurate, scalable, and robust than other learning methods. Simulation results demonstrate that the proposed smart network achieves better performance in terms of TOC compared to fixed centralized or distributed resource management schemes that lack dynamism. Moreover, our proposed algorithm outperforms conventional learning methods employed in recent state-of-the-art network designs.
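The trade-off behind the TOC metric can be sketched as follows. This is a minimal illustration, not the paper's formulation: the linear weighting, the weight values, and the `select_mode` helper are all assumptions introduced here to show how an SDN controller could score the centralized and distributed operation types and pick the better one.

```python
# Hypothetical sketch of a throughput-overhead-complexity (TOC) style
# score: reward throughput, penalize signaling overhead and
# computational complexity. Weights are illustrative assumptions.
def toc(throughput, overhead, complexity, w_t=1.0, w_o=0.5, w_c=0.5):
    return w_t * throughput - w_o * overhead - w_c * complexity

def select_mode(centralized_stats, distributed_stats):
    """Pick the operation type (centralized vs. distributed) that
    maximizes the TOC score, mimicking the controller's decision."""
    modes = {
        "centralized": toc(*centralized_stats),
        "distributed": toc(*distributed_stats),
    }
    return max(modes, key=modes.get)

# Centralized: higher rate but heavier signaling and computation;
# distributed: lower rate but cheap coordination.
print(select_mode((10.0, 6.0, 5.0), (7.0, 1.0, 1.0)))  # -> distributed
```

In the paper, this scalar decision is instead learned by a SAC agent that observes the network state, but the scoring intuition is the same: no single operation type dominates on all three indicators at once.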
Abstract: As the services and requirements of next-generation wireless networks become increasingly diversified, it is estimated that the current frequency bands of mobile network operators (MNOs) will be unable to cope with the immensity of anticipated demands. Due to spectrum scarcity, there has been a growing trend among stakeholders toward identifying practical solutions to make the most productive use of the exclusively allocated bands on a shared basis through spectrum sharing mechanisms. However, due to the technical complexities of these mechanisms, their design presents challenges, as it requires coordination among multiple entities. To address this challenge, in this paper, we begin with a detailed review of the recent literature on spectrum sharing methods, classifying them on the basis of their operational frequency regime, that is, whether they are implemented to operate in licensed bands (e.g., licensed shared access (LSA), spectrum access system (SAS), and dynamic spectrum sharing (DSS)) or unlicensed bands (e.g., LTE-unlicensed (LTE-U), licensed assisted access (LAA), MulteFire, and new radio-unlicensed (NR-U)). Then, in order to narrow the gap between standardization and vendor-specific implementations, we provide a detailed review of the potential implementation scenarios and necessary amendments to legacy cellular networks from the perspective of telecom vendors and regulatory bodies. Next, we analyze applications of artificial intelligence (AI) and machine learning (ML) techniques for facilitating spectrum sharing mechanisms and leveraging the full potential of autonomous sharing scenarios. Finally, we conclude the paper by presenting open research challenges, which aim to provide insights into prospective research endeavors.
Abstract: Network slicing (NwS) is one of the main technologies in the fifth generation of mobile communication and beyond (5G+). One of the important challenges in NwS is information uncertainty, which mainly involves demand and channel state information (CSI). Demand uncertainty is divided into three types: the number of user requests, the amount of bandwidth, and the requested virtual network function workloads. Moreover, CSI uncertainty is modeled by three methods: worst-case, probabilistic, and hybrid. In this paper, our goal is to maximize the utility of the infrastructure provider by exploiting deep reinforcement learning algorithms in end-to-end NwS resource allocation under demand and CSI uncertainties. The proposed formulation is a non-convex mixed-integer non-linear programming problem. To perform robust resource allocation in problems that involve uncertainty, we need a history of previous information. To this end, we use the recurrent deterministic policy gradient (RDPG) algorithm, a recurrent and memory-based approach in deep reinforcement learning. Then, we compare the RDPG method in different scenarios with soft actor-critic (SAC), deep deterministic policy gradient (DDPG), distributed, and greedy algorithms. The simulation results show that the SAC method outperforms the DDPG, distributed, and greedy methods, in that order. Moreover, the RDPG method outperforms the SAC approach by 70% on average.
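The reason a memory-based policy such as RDPG helps under demand uncertainty can be shown with a toy example. This is an assumption-laden sketch, not the paper's RDPG implementation: the hidden state here is a simple exponential moving average rather than a trained recurrent network, but it captures the same idea that summarizing past noisy observations tracks the true demand better than reacting to the latest observation alone.

```python
# Sketch: a hidden state accumulated over past observations (the role a
# recurrent layer plays in RDPG) filters noise out of demand estimates.
class RecurrentEstimator:
    def __init__(self, alpha=0.3):
        self.alpha = alpha  # smoothing factor (illustrative hyperparameter)
        self.h = None       # hidden state: running demand estimate

    def step(self, obs):
        """Update the hidden state with a new noisy observation."""
        if self.h is None:
            self.h = obs
        else:
            self.h = (1 - self.alpha) * self.h + self.alpha * obs
        return self.h

est = RecurrentEstimator()
noisy = [10.0, 14.0, 6.0, 12.0, 9.0, 11.0]  # noisy views of a true demand of ~10
for o in noisy:
    smoothed = est.step(o)

# A memoryless policy sees only the last observation (error 1.0);
# the recurrent estimate lands much closer to the true demand.
memoryless_error = abs(noisy[-1] - 10.0)
recurrent_error = abs(smoothed - 10.0)
print(recurrent_error < memoryless_error)
```

In the actual algorithm, the recurrent state also conditions the actor and critic networks, so the allocation policy itself, not just the estimate, benefits from history.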
Abstract: In this paper, we propose a joint radio and core resource allocation framework for NFV-enabled networks. In the proposed system model, the goal is to maximize energy efficiency (EE) by guaranteeing end-to-end (E2E) quality of service (QoS) for different service types. To this end, we formulate an optimization problem in which power and spectrum resources are allocated in the radio part. In the core part, the chaining, placement, and scheduling of functions are performed to ensure the QoS of all users. This joint optimization problem is modeled as a Markov decision process (MDP), considering time-varying characteristics of the available resources and wireless channels. A soft actor-critic deep reinforcement learning (SAC-DRL) algorithm based on the maximum entropy framework is subsequently utilized to solve the above MDP. Numerical results reveal that the proposed joint approach based on the SAC-DRL algorithm could significantly reduce energy consumption compared to the case in which the radio resource allocation (R-RA) and NFV resource allocation (NFV-RA) problems are optimized separately.
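The advantage of joint over separate optimization can be illustrated with a toy search. All numbers and cost models below are illustrative assumptions, not taken from the paper: the point is only that an E2E delay budget can be split flexibly between radio transmission and core processing, so a joint search over (power, CPU) pairs can find cheaper configurations than giving each part a fixed half of the budget.

```python
# Toy model: radio delay shrinks with transmit power, core delay shrinks
# with CPU allocation, and energy is the sum of both resources.
import itertools

DELAY_BUDGET = 3.0          # E2E QoS constraint (assumed units)
powers = [1, 2, 3, 4]       # candidate radio power levels
cpus = [1, 2, 3, 4]         # candidate core CPU levels

def radio_delay(p):  return 6 / p   # radio is the slower segment here
def core_delay(c):   return 2 / c
def energy(p, c):    return p + c

# Joint approach: search all pairs meeting the E2E budget together.
feasible = [(p, c) for p, c in itertools.product(powers, cpus)
            if radio_delay(p) + core_delay(c) <= DELAY_BUDGET]
joint = min(feasible, key=lambda pc: energy(*pc))

# Separate approach: each part must independently meet half the budget.
p_sep = min(p for p in powers if radio_delay(p) <= DELAY_BUDGET / 2)
c_sep = min(c for c in cpus if core_delay(c) <= DELAY_BUDGET / 2)

print(energy(*joint), energy(p_sep, c_sep))  # joint spends less energy
```

Because the radio segment is the bottleneck in this toy, the joint search shifts most of the delay budget to it and saves energy overall, which is the structural effect the SAC-DRL agent learns to exploit across time-varying channels.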