Abstract: Non-orthogonal multiple access (NOMA) enabled fog radio access networks (NOMA-F-RANs) have been regarded as a promising enabler to relieve network congestion, reduce delivery latency, and improve fog user equipments' (F-UEs') quality of service (QoS). Nevertheless, the effectiveness of NOMA-F-RANs relies heavily on accurately charted F-UE feature information (preference distribution, positions, mobility, etc.) as well as on effective caching, computing, and resource allocation strategies. In this article, we explore how artificial intelligence (AI) techniques can be utilized to tackle these challenges. Specifically, we first elaborate on the NOMA-F-RAN architecture, shedding light on its key modules, namely cooperative caching and cache-aided mobile edge computing (MEC). Then, the AI-driven techniques potentially applicable to the principal issues of NOMA-F-RANs are reviewed. Through case studies, we show the efficacy of AI-enabled methods in terms of F-UEs' latent feature extraction and cooperative caching. Finally, future trends of AI-driven NOMA-F-RANs, including open research issues and challenges, are identified.
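To make the cooperative caching module concrete, the following is a minimal sketch of a popularity-driven cooperative cache placement among fog access points. The contents, predicted popularities, cache capacities, and the greedy round-robin rule are illustrative assumptions, not the article's actual algorithm.

```python
# Toy cooperative caching sketch: spread the most popular contents across
# cooperating fog access points so neighbouring caches complement each other.
from collections import defaultdict

contents = {"c1": 0.40, "c2": 0.25, "c3": 0.20, "c4": 0.10, "c5": 0.05}  # predicted popularity (assumed)
fog_nodes = {"FAP-1": 2, "FAP-2": 2}  # cache capacity in number of contents (assumed)

placement = defaultdict(list)
ranked = sorted(contents, key=contents.get, reverse=True)
node_cycle = list(fog_nodes)

# Greedy round-robin rule: assign contents in descending popularity, alternating
# between nodes, so the two caches hold disjoint sets of the top contents.
for i, content in enumerate(ranked):
    node = node_cycle[i % len(node_cycle)]
    if len(placement[node]) < fog_nodes[node]:
        placement[node].append(content)

print(dict(placement))  # e.g. {'FAP-1': ['c1', 'c3'], 'FAP-2': ['c2', 'c4']}
```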
Abstract: Driven by the unprecedented high-throughput and low-latency requirements of next-generation wireless networks, this paper introduces an artificial intelligence (AI) enabled framework in which unmanned aerial vehicles (UAVs) use non-orthogonal multiple access (NOMA) and mobile edge computing (MEC) techniques to serve terrestrial mobile users (MUs). The proposed framework enables terrestrial MUs to offload their computational tasks simultaneously, intelligently, and flexibly, thus enhancing their connectivity while reducing their transmission latency and energy consumption. To this end, the fundamentals of the framework are first introduced. Then, a number of communication and AI techniques are proposed to improve the quality of experience of terrestrial MUs. In particular, federated learning and reinforcement learning are introduced for intelligent task offloading and computing resource allocation; for each learning technique, the motivations, challenges, and representative results are presented. Finally, several key technical challenges and open research issues of the proposed framework are summarized.
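As an illustration of the reinforcement-learning-based offloading idea, below is a minimal tabular Q-learning sketch for a toy binary decision (compute locally vs. offload to the UAV-mounted MEC server). The state definition, reward, and channel dynamics are simplified assumptions and do not reproduce the paper's system model.

```python
# Toy Q-learning sketch for UAV-MEC task offloading decisions.
import numpy as np

rng = np.random.default_rng(0)
n_states = 4      # quantized MU-UAV channel-quality levels (assumed)
n_actions = 2     # 0 = local computing, 1 = offload to UAV via NOMA (assumed)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    # Hypothetical reward: offloading pays off when the channel is good.
    reward = state / (n_states - 1) if action == 1 else 0.5
    next_state = rng.integers(n_states)  # channel evolves randomly in this toy model
    return next_state, reward

state = rng.integers(n_states)
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Learned offloading action per channel state:", np.argmax(Q, axis=1))
```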
Abstract: A novel Twitter-context-aided content caching (TAC) framework is proposed for enhancing caching efficiency by taking advantage of the legibility and massive volume of Twitter data. To promote caching efficiency, three machine learning models are proposed to predict latent events and event popularity, utilizing collected geo-tagged Twitter data together with the geographic information of the adjacent base stations (BSs). First, we propose a latent Dirichlet allocation (LDA) model for latent event forecasting, exploiting the strength of LDA in natural language processing (NLP). Then, we conceive a long short-term memory (LSTM) network with a skip-gram embedding approach and an LSTM network with a continuous skip-gram geo-aware embedding approach for event popularity forecasting. Lastly, we associate the predicted latent events and their popularity with the caching strategy. Extensive practical experiments demonstrate that: (1) the proposed TAC framework outperforms the conventional caching framework and can be employed in practical applications thanks to its ability to track public interests; (2) the proposed LDA approach retains its NLP advantages on Twitter data; (3) the perplexity of the proposed skip-gram-based LSTM is lower than that of the conventional LDA approach; (4) the hit rate of tweets varies from 50% to 65%, and the hit rate of cached contents reaches approximately 75% with a smaller caching space than conventional algorithms.
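For the latent-event forecasting step, the following is a minimal sketch of fitting an LDA topic model to tokenized geo-tagged tweets using gensim, where each latent topic is treated as a candidate latent event. The tweet tokens and number of topics are hypothetical placeholders, not the paper's dataset or settings.

```python
# Toy LDA sketch for latent-event discovery from geo-tagged tweets.
from gensim import corpora
from gensim.models import LdaModel

tweets = [
    ["stadium", "concert", "tonight", "downtown"],
    ["traffic", "jam", "highway", "accident"],
    ["concert", "tickets", "downtown", "music"],
]  # tokenized geo-tagged tweets (hypothetical)

dictionary = corpora.Dictionary(tweets)
corpus = [dictionary.doc2bow(tweet) for tweet in tweets]

# Fit LDA; each topic's top words summarize one candidate latent event.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```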