Abstract: The massive machine-type communications (mMTC) service will be part of the new services planned for beyond-fifth-generation (B5G) wireless communications. In mMTC, thousands of devices sporadically access available resource blocks on the network. In this scenario, the massive random access (RA) problem arises when two or more devices collide by selecting the same resource block. Several techniques deal with this problem. One of them deploys $Q$-learning (QL), in which each device stores in its $Q$-table the rewards sent by the central node indicating the quality of the transmission performed; the device thereby learns which resource blocks to select in order to avoid collisions. We propose a multi-power-level QL (MPL-QL) algorithm that uses a non-orthogonal multiple access (NOMA) transmission scheme to generate transmit-power diversity and accommodate more than one device in the same time-slot, as long as the signal-to-interference-plus-noise ratio (SINR) exceeds a threshold value. The numerical results reveal that the best performance-complexity trade-off is obtained with a larger number of power levels, typically eight. The proposed MPL-QL delivers better throughput and lower latency than other recent QL-based algorithms found in the literature.
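To make the mechanism concrete, the following is a minimal, hypothetical sketch (not the paper's exact algorithm) of MPL-QL-style random access: each device keeps a Q-table over (time-slot, power-level) actions, and the SINR test is replaced by a simplified proxy in which a transmission survives when no co-slot device uses an equal or higher power level. All parameter names and values here are illustrative assumptions.

```python
import random

SLOTS, LEVELS = 4, 8   # resource blocks per frame, NOMA power levels (assumed)
ALPHA = 0.1            # Q-learning step size (assumed)

def choose(q, eps=0.1):
    """Epsilon-greedy pick of a (slot, power-level) action from a Q-table."""
    if random.random() < eps:
        return random.randrange(SLOTS), random.randrange(LEVELS)
    return max(q, key=q.get)

def simulate(n_devices=6, frames=500, seed=0):
    """Run a toy multi-power-level QL random-access simulation.

    Returns the fraction of successful transmissions over all attempts.
    """
    random.seed(seed)
    # One Q-table per device, initialized to zero for every action.
    qs = [{(s, l): 0.0 for s in range(SLOTS) for l in range(LEVELS)}
          for _ in range(n_devices)]
    success = 0
    for _ in range(frames):
        picks = [choose(q) for q in qs]
        for i, (s, l) in enumerate(picks):
            # Power levels of the other devices sharing this slot.
            others = [l2 for j, (s2, l2) in enumerate(picks)
                      if j != i and s2 == s]
            # SINR proxy: the device "captures" the slot only if it uses a
            # strictly higher power level than every co-slot interferer.
            ok = all(l > l2 for l2 in others)
            reward = 1.0 if ok else -1.0
            # Stateless Q-update with the central node's reward.
            qs[i][(s, l)] += ALPHA * (reward - qs[i][(s, l)])
            success += ok
    return success / (frames * n_devices)
```

In this toy setup, power diversity is what lets two devices share a slot without both failing: the stronger transmission succeeds while only the weaker one is rewarded negatively, which is the intuition behind accommodating multiple devices per time-slot.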
Abstract: In mMTC mode, with thousands of devices sporadically trying to access network resources, the problem of random access (RA) and of collisions between devices that select the same resources becomes crucial. A promising approach to this RA problem is to use learning mechanisms, especially the Q-learning algorithm, in which devices learn the best time-slots to transmit through rewards sent by the central node. In this work, we propose a distributed packet-based learning method that varies the reward from the central node so as to favor devices with a larger number of remaining packets to transmit. Our numerical results indicate that the proposed distributed packet-based Q-learning method attains a much better throughput-latency trade-off than the alternative independent and collaborative techniques in practical scenarios of interest. Moreover, the packet-based technique requires fewer payload bits than the collaborative Q-learning RA technique to achieve the same normalized throughput.
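The packet-based reward idea can be sketched as follows. This is an illustrative toy model under stated assumptions, not the authors' exact scheme: the central node scales the success reward by each device's remaining backlog, so backlogged devices reinforce good time-slots faster. The reward shape, backlogs, and learning rate below are assumptions for illustration.

```python
import random

SLOTS = 4     # time-slots per frame (assumed)
ALPHA = 0.1   # Q-learning step size (assumed)

def packet_reward(remaining, max_packets):
    """Hypothetical packet-based reward: success reward grows with backlog."""
    return remaining / max_packets

def simulate(backlogs=(8, 4, 2, 1), frames=300, seed=1):
    """Toy distributed packet-based QL RA; returns packets delivered."""
    random.seed(seed)
    remaining = list(backlogs)
    qs = [[0.0] * SLOTS for _ in backlogs]  # one Q-row of slots per device
    delivered = 0
    for _ in range(frames):
        active = [i for i, r in enumerate(remaining) if r > 0]
        if not active:
            break
        # Each backlogged device picks a slot epsilon-greedily.
        picks = {}
        for i in active:
            q = qs[i]
            s = (random.randrange(SLOTS) if random.random() < 0.1
                 else max(range(SLOTS), key=q.__getitem__))
            picks.setdefault(s, []).append(i)
        for s, devs in picks.items():
            if len(devs) == 1:
                # Success: reward weighted by the device's remaining packets.
                i = devs[0]
                r = packet_reward(remaining[i], max(backlogs))
                qs[i][s] += ALPHA * (r - qs[i][s])
                remaining[i] -= 1
                delivered += 1
            else:
                # Collision: every involved device receives a negative reward.
                for i in devs:
                    qs[i][s] += ALPHA * (-1.0 - qs[i][s])
    return delivered
```

The design choice being illustrated is that the reward is no longer uniform across devices: a device with eight packets left earns a larger positive reward per success than one with a single packet, steering slot ownership toward the devices that most need it and thereby improving the throughput-latency trade-off.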