Abstract: Future wireless networks aim to deliver high data rates and low power consumption while ensuring seamless connectivity, necessitating robust optimization. Large language models (LLMs) have been deployed for generalized optimization scenarios. To take advantage of generative AI (GAI) models, we propose retrieval-augmented generation (RAG) for multi-sensor wireless environment perception. Using domain-specific prompt engineering, we apply RAG to efficiently harness multimodal data inputs from sensors in a wireless environment. We propose key pre-processing pipelines, including image-to-text conversion, object detection, and distance calculation, that transform multi-sensor data into the unified vector database essential for optimizing LLMs on global wireless tasks. Our evaluation, conducted with OpenAI's GPT and Google's Gemini models, demonstrates improvements of 8%, 8%, 10%, 7%, and 12% in relevancy, faithfulness, completeness, similarity, and accuracy, respectively, over conventional LLM-based designs. Furthermore, our RAG-based LLM framework with vectorized databases is computationally efficient, providing real-time convergence under latency constraints.
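A minimal sketch of the kind of pre-processing pipeline the abstract describes, feeding a unified vector database for retrieval. The captioning and embedding helpers below are toy stand-ins (all names are hypothetical, not from the paper); a real system would plug in a vision-language model and a production vector store.

```python
# Toy multi-sensor RAG pre-processing: caption -> distance annotation -> embed -> retrieve.
import math

def image_to_text(reading):
    # Stand-in for an image-captioning / object-detection model: formats metadata.
    return f"camera sees {reading['object']} at bearing {reading['bearing']} deg"

def embed(text):
    # Toy bag-of-characters embedding; a real pipeline would use a text encoder.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Build the unified vector database: one (embedding, text) record per sensor cue.
readings = [
    {"object": "pedestrian", "bearing": 45, "pos": (3.0, 4.0)},
    {"object": "vehicle", "bearing": 120, "pos": (30.0, 40.0)},
]
ap_pos = (0.0, 0.0)
vector_db = []
for r in readings:
    text = image_to_text(r) + f", range {math.dist(ap_pos, r['pos']):.1f} m"
    vector_db.append((embed(text), text))

# Retrieval: cosine similarity against a query; the top record augments the LLM prompt.
query = embed("where is the pedestrian")
best = max(vector_db, key=lambda rec: sum(a * b for a, b in zip(rec[0], query)))
print("retrieved context:", best[1])
```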
Abstract: Integrating non-terrestrial networks (NTNs) with terrestrial networks (TNs) is key to enhancing coverage, capacity, and reliability in future wireless communications. However, the multi-tier, heterogeneous architecture of these integrated TN-NTNs introduces complex challenges in spectrum sharing and interference management. Conventional optimization approaches struggle to handle the high-dimensional decision space and dynamic nature of these networks. This paper proposes a novel hierarchical deep reinforcement learning (HDRL) framework to address these challenges and enable intelligent spectrum sharing. The proposed framework leverages the inherent hierarchy of the network, with separate policies for each tier, to learn and optimize spectrum allocation decisions at different timescales and levels of abstraction. By decomposing the complex spectrum sharing problem into manageable sub-tasks and allowing for efficient coordination among the tiers, the HDRL approach offers a scalable and adaptive solution for spectrum management in future TN-NTNs. Simulation results demonstrate the superior performance of the proposed framework compared to traditional approaches, highlighting its potential to enhance spectral efficiency and network capacity in dynamic, multi-tier environments.
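A minimal sketch of the hierarchical decomposition the framework relies on, assuming two tiers and toy rule-based policies: an upper (NTN) tier splits the band pool across regions on a slow timescale, and a lower (TN) tier picks the least-interfered bands from its quota on a fast timescale. The actual HDRL framework would replace both rules with learned neural policies.

```python
# Two-tier, two-timescale spectrum decisions (toy policies, illustrative only).
import random

N_BANDS = 8

def ntn_policy(demand_per_region):
    # Upper tier (slow timescale): split bands proportionally to regional demand.
    total = sum(demand_per_region) or 1
    return [round(N_BANDS * d / total) for d in demand_per_region]

def tn_policy(quota, interference):
    # Lower tier (fast timescale): take the least-interfered bands within the quota.
    ranked = sorted(range(len(interference)), key=lambda b: interference[b])
    return ranked[:quota]

demand = [3, 1]                                # per-region traffic demand
quotas = ntn_policy(demand)                    # tier-1 decision, e.g. every 100 frames
for region, quota in enumerate(quotas):
    interference = [random.random() for _ in range(N_BANDS)]
    bands = tn_policy(quota, interference)     # tier-2 decision, every frame
    print(f"region {region}: quota={quota}, bands={bands}")
```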
Abstract: The rapid growth of computation-intensive applications like augmented reality, autonomous driving, remote healthcare, and smart cities has exposed the limitations of traditional terrestrial networks, particularly in terms of inadequate coverage, limited capacity, and high latency in remote areas. This chapter explores how integrated terrestrial and non-terrestrial networks (IT-NTNs) can address these challenges and enable efficient computation offloading. We examine mobile edge computing (MEC) and its evolution toward multiple-access edge computing, highlighting the critical role computation offloading plays for resource-constrained devices. We then discuss the architecture of IT-NTNs, focusing on how terrestrial base stations, unmanned aerial vehicles (UAVs), high-altitude platforms (HAPs), and low-Earth-orbit (LEO) satellites work together to deliver ubiquitous connectivity. Furthermore, we analyze various computation offloading strategies, including edge, cloud, and hybrid offloading, outlining their strengths and weaknesses. Key enabling technologies such as non-orthogonal multiple access (NOMA), mmWave/THz communication, and reconfigurable intelligent surfaces (RIS) are also explored, together with existing algorithms for resource allocation, task-offloading decisions, and mobility management. Finally, we conclude by highlighting the transformative impact of computation offloading in IT-NTNs across diverse application areas and discuss key challenges and future research directions, emphasizing the potential of these networks to revolutionize communication and computation paradigms.
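A toy illustration of the edge/cloud/local offloading trade-off the chapter analyzes, assuming simple linear latency models (all numbers are illustrative placeholders); real IT-NTN schedulers would also account for queueing, mobility, and energy budgets.

```python
# Latency-based offloading decision: local execution vs. edge (MEC) vs. cloud via LEO.
def latency(task_bits, cpu_cycles, uplink_bps, server_flops):
    # Transmission delay plus remote execution delay.
    return task_bits / uplink_bps + cpu_cycles / server_flops

task_bits, cpu_cycles = 5e6, 2e9
options = {
    "local":           cpu_cycles / 1e9,                                     # device CPU only
    "edge (MEC/UAV)":  latency(task_bits, cpu_cycles, 50e6, 20e9),
    "cloud (via LEO)": latency(task_bits, cpu_cycles, 20e6, 200e9) + 0.04,   # + propagation
}
for name, t in options.items():
    print(f"{name:15s}: {t * 1e3:7.1f} ms")
print("chosen:", min(options, key=options.get))
```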
Abstract: Existing resource allocation methods for uplink multicarrier non-orthogonal multiple access (MC-NOMA) systems suffer from several shortcomings. They either allocate the same power to a user across all subcarriers, or rely on heuristic near-far (strong-channel/weak-channel) user grouping to assign the decoding order for successive interference cancellation (SIC). This paper proposes a novel optimal power-subcarrier allocation for uplink MC-NOMA that jointly achieves the optimal power-subcarrier allocation and the optimal SIC decoding order. Furthermore, the proposed method includes a time-sharing algorithm that dynamically alternates the decoding orders of the participating users to achieve the required data rates, even when no single decoding order can do so. Extensive experimental evaluations show that the new method achieves higher sum data rates and lower power consumption than current NOMA methods.
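A minimal numerical sketch of why the SIC decoding order matters and how time-sharing between two orders can deliver a rate pair that neither order achieves alone. Powers, gains, and the time-share fraction below are illustrative values, not results from the paper.

```python
# SIC rates under two decoding orders, then a convex combination via time-sharing.
import math

def sic_rates(powers, gains, order, noise=1.0):
    # Decode users in `order`; each decoded user sees later users as interference.
    rates = {}
    for i, u in enumerate(order):
        interference = sum(powers[v] * gains[v] for v in order[i + 1:])
        sinr = powers[u] * gains[u] / (noise + interference)
        rates[u] = math.log2(1 + sinr)
    return rates

powers, gains = {0: 4.0, 1: 4.0}, {0: 1.0, 1: 0.9}
r_a = sic_rates(powers, gains, order=[0, 1])   # decode user 0 first
r_b = sic_rates(powers, gains, order=[1, 0])   # decode user 1 first

# Time-share: fraction t of each frame uses order A, (1 - t) uses order B.
t = 0.5
shared = {u: t * r_a[u] + (1 - t) * r_b[u] for u in powers}
print("order A:", r_a)
print("order B:", r_b)
print("time-shared:", shared)
```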
Abstract: This paper proposes a novel joint channel-estimation and source-detection algorithm using successive interference cancellation (SIC)-aided generative score-based diffusion models. Prior work in this area focuses on massive MIMO scenarios, which are typically characterized by full-rank channels, and fails in low-rank channel scenarios. The proposed algorithm outperforms existing methods in joint channel estimation and source detection, especially in low-rank scenarios where the number of users exceeds the number of antennas at the access point (AP). The proposed score-based iterative diffusion process estimates the gradient of the prior distribution over partial channels, and recursively updates the estimated channel parts as well as the source. Extensive simulation results show that the proposed method outperforms the baseline methods in terms of normalized mean squared error (NMSE) and symbol error rate (SER) in both full-rank and low-rank channel scenarios, with the most pronounced gains in the latter, across various signal-to-noise ratios (SNRs).
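A toy sketch of the score-based iterative refinement at the heart of such methods: annealed Langevin updates that combine the score (gradient of the log-prior) with a likelihood gradient from the observation. The paper learns the score over partial channels with a neural network; this stand-in assumes a Gaussian prior with a known closed-form score, so every value here is illustrative.

```python
# Annealed Langevin refinement of a channel estimate under a toy Gaussian prior.
import numpy as np

rng = np.random.default_rng(0)
h_true = rng.normal(size=4)                 # "true" channel (toy)
sigma_n = 0.1
y = h_true + sigma_n * rng.normal(size=4)   # noisy observation

def score(h):
    # Gradient of log N(h; 0, I): the score of a standard Gaussian prior.
    return -h

h = rng.normal(size=4)                      # random initialization
for step_size in np.geomspace(0.1, 1e-3, 50):   # annealed step-size schedule
    grad_lik = (y - h) / sigma_n**2         # gradient of the Gaussian log-likelihood
    noise = np.sqrt(2 * step_size) * rng.normal(size=4)
    h = h + step_size * (score(h) + grad_lik) + noise

print("NMSE:", np.sum((h - h_true) ** 2) / np.sum(h_true ** 2))
```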
Abstract: Upcoming augmented reality (AR) and virtual reality (VR) systems require high data rates ($\geq$ 500 Mbps) and low power consumption for a seamless experience. As the number of subscribing users grows, the total number of antennas across all transmitting users far exceeds the number of antennas at the access point (AP). This results in a low-rank wireless channel, presenting a bottleneck for uplink communication systems. Current uplink systems based on orthogonal multiple access (OMA), as well as proposed non-orthogonal multiple access (NOMA) schemes, fail to achieve the required data rates and power consumption under predominantly low-rank channel scenarios. This paper introduces an optimal power-subcarrier allocation algorithm for multi-carrier NOMA, named minPMAC, and an associated time-sharing algorithm that adaptively changes successive interference cancellation (SIC) decoding orders to maximize sum data rates in these low-rank channels. This Lagrangian-based optimization technique, although globally optimal, is prohibitive in terms of runtime, making it inefficient for real-time scenarios. Hence, we propose a novel near-optimal deep reinforcement learning-based energy sum optimization (DRL-minPMAC) that achieves real-time efficiency. Extensive experimental evaluations show that minPMAC achieves 28\% and 39\% higher data rates than NOMA and OMA baselines, respectively. Furthermore, the proposed DRL-minPMAC runs approximately 5 times faster than minPMAC and achieves 83\% of the globally optimal data rates in real time.
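A minimal sketch of the Lagrangian dual ascent that a minPMAC-style formulation builds on: a dual variable prices the rate constraint, and the primal power solution takes a water-filling form. One user, one carrier, with all constants chosen for illustration only; the paper's algorithm handles many users, subcarriers, and decoding orders.

```python
# Dual ascent for minimum-power transmission subject to a rate target (toy scale).
import math

gain, noise, r_target = 0.8, 1.0, 2.0   # channel gain, noise power, target bits/s/Hz
lam, p, rate = 1.0, 0.0, 0.0            # dual variable, power, achieved rate

for _ in range(200):
    # Primal step: p minimizes p - lam * log2(1 + gain * p / noise)  (water-filling form).
    p = max(lam / math.log(2) - noise / gain, 0.0)
    rate = math.log2(1 + gain * p / noise)
    # Dual subgradient step: raise the price while the rate target is violated.
    lam = max(lam + 0.05 * (r_target - rate), 0.0)

print(f"power={p:.3f}, rate={rate:.3f} (target {r_target})")
```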
Abstract: Efficient spectrum allocation has become crucial as the surge in wireless-connected devices demands seamless support for more users and applications, a trend expected to grow with 6G. Innovations in satellite technologies such as SpaceX's Starlink have enabled non-terrestrial networks (NTNs) to work alongside terrestrial networks (TNs) and allocate spectrum based on regional demands. Existing spectrum sharing approaches in TNs use machine learning for interference minimization through power allocation and spectrum sensing, but the unique characteristics of NTNs, such as varying orbital dynamics and coverage patterns, require more sophisticated coordination mechanisms. The proposed work uses a hierarchical deep reinforcement learning (HDRL) approach for efficient spectrum allocation across integrated TN-NTN networks. DRL agents at each tier of the TN-NTN hierarchy dynamically learn and allocate spectrum based on regional trends. This framework is 50x faster than an exhaustive search algorithm while achieving 95\% of the optimal spectral efficiency. Moreover, it is 3.75x faster than multi-agent DRL, which is commonly used for spectrum sharing, and achieves a 12\% higher overall average throughput.
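A toy tabular stand-in for the learning loop inside one tier's agent: epsilon-greedy value updates over band choices, rewarded by achieved throughput. The paper's agents are deep RL policies over a much richer state; this only illustrates how a per-tier agent adapts its allocation to regional trends, with all constants invented for the example.

```python
# Bandit-style Q-learning over band selection for a single tier (illustrative only).
import random

N_BANDS, EPS, ALPHA = 4, 0.1, 0.2
q = [0.0] * N_BANDS
true_throughput = [1.0, 3.0, 2.0, 0.5]    # hidden regional quality per band

for step in range(500):
    # Epsilon-greedy band choice: explore occasionally, otherwise exploit.
    if random.random() < EPS:
        band = random.randrange(N_BANDS)
    else:
        band = max(range(N_BANDS), key=q.__getitem__)
    reward = true_throughput[band] + random.gauss(0, 0.3)   # noisy throughput
    q[band] += ALPHA * (reward - q[band])                   # incremental value update

print("learned values:", [round(v, 2) for v in q])
print("preferred band:", max(range(N_BANDS), key=q.__getitem__))
```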
Abstract: The advent of non-terrestrial networks (NTN) represents a compelling response to the International Mobile Telecommunications 2030 (IMT-2030) framework, enabling the delivery of advanced, seamless connectivity that supports reliable, sustainable, and resilient communication systems. Nevertheless, integrating NTN with terrestrial networks (TN) requires considerable alterations to the existing cellular infrastructure to address the challenges intrinsic to NTN implementation. Ambient backscatter communication (AmBC), which transmits data to the intended recipient by modulating and reflecting ambient radio frequency (RF) signals, exhibits considerable potential for the effective integration of NTN and TN. However, AmBC is constrained by limitations in power, interference, and related factors. Artificial intelligence (AI), in contrast, demonstrates significant potential for predictive analytics in wireless networks through the use of extensive datasets. AI techniques enable real-time optimization of network parameters, mitigating the interference and power limitations of AmBC. These predictive models also enhance the adaptive integration of NTN and TN, driving significant improvements in network reliability and energy efficiency (EE). In this paper, we present a comprehensive examination of how the combination of AI, AmBC, and NTN can facilitate the integration of NTN and TN, together with a thorough analysis indicating a marked enhancement in EE based on this triadic relationship.
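A small illustrative computation of the energy-efficiency metric the analysis centers on, EE = achievable rate / total consumed power, contrasting a conventional active link with an AmBC link that spends almost no transmit power on a weaker reflected signal. All SINR and power values are invented for the illustration, not results from the paper.

```python
# Energy efficiency (bits per joule) for an active link vs. a backscatter link.
import math

def ee(bandwidth_hz, sinr, p_tx, p_circuit):
    rate = bandwidth_hz * math.log2(1 + sinr)   # Shannon rate, bits/s
    return rate / (p_tx + p_circuit)            # bits per joule

active_ee = ee(1e6, sinr=100.0, p_tx=0.5, p_circuit=0.1)     # conventional transmitter
ambc_ee = ee(1e6, sinr=2.0, p_tx=1e-6, p_circuit=0.01)       # reflected ambient RF
print(f"active link EE: {active_ee / 1e6:8.1f} Mbit/J")
print(f"AmBC link EE:   {ambc_ee / 1e6:8.1f} Mbit/J")
```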
Abstract: This work explores the deployment of active reconfigurable intelligent surfaces (A-RIS) in integrated terrestrial and non-terrestrial networks (TN-NTN) utilizing coordinated multipoint non-orthogonal multiple access (CoMP-NOMA). Our system model incorporates a UAV-assisted RIS coordinated with a terrestrial RIS to enhance the received signal. We maximize the sum rate for all users in the network using a custom hybrid proximal policy optimization (H-PPO) algorithm that optimizes the UAV trajectory, base station (BS) power allocation factors, active RIS amplification factor, and phase shift matrix. We integrate edge users into NOMA pairs to achieve diversity gain, further enhancing the experience for edge users. Exhaustive comparisons with passive RIS-assisted networks demonstrate the superior efficacy of active RIS in terms of energy efficiency, outage probability, and network sum rate.
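A toy evaluation of the building block of the sum-rate objective being maximized: a single user's rate through an active RIS that applies an amplification factor and per-element phase shifts to the cascaded channel. The channels below are random placeholders rather than the paper's model, and phase alignment is used as a classic closed-form choice in place of the learned policy.

```python
# Single-user rate through an active RIS with amplification and phase control.
import numpy as np

rng = np.random.default_rng(1)
N = 16                                                    # RIS elements
h_br = rng.normal(size=N) + 1j * rng.normal(size=N)       # BS -> RIS channel
h_ru = rng.normal(size=N) + 1j * rng.normal(size=N)       # RIS -> user channel

def user_snr(amp, phases, p_tx=1.0, noise=1.0):
    theta = amp * np.exp(1j * phases)                     # active RIS response
    eff = np.sum(h_br * theta * h_ru)                     # cascaded effective channel
    return p_tx * np.abs(eff) ** 2 / noise

# Phase-align each element, then compare passive-like vs. active amplification.
aligned = -np.angle(h_br * h_ru)
for amp in (1.0, 2.0):                                    # 1.0 ~ passive, >1 active
    rate = np.log2(1 + user_snr(amp, aligned))
    print(f"amplification {amp}: rate = {rate:.2f} bps/Hz")
```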
Abstract: Reconfigurable intelligent surface (RIS)-assisted aerial non-terrestrial networks (NTNs) offer a promising paradigm for enhancing wireless communications in the era of 6G and beyond. By integrating RIS with aerial platforms such as unmanned aerial vehicles (UAVs) and high-altitude platforms (HAPs), these networks can intelligently control signal propagation, extending coverage, improving capacity, and enhancing link reliability. This article explores the application of deep reinforcement learning (DRL) as a powerful tool for optimizing RIS-assisted aerial NTNs. We focus on hybrid proximal policy optimization (H-PPO), a robust DRL algorithm well-suited for handling the complex, hybrid action spaces inherent in these networks. Through a case study of an aerial RIS (ARIS)-aided coordinated multi-point non-orthogonal multiple access (CoMP-NOMA) network, we demonstrate how H-PPO can effectively optimize the system and maximize the sum rate while adhering to system constraints. Finally, we discuss key challenges and promising research directions for DRL-powered RIS-assisted aerial NTNs, highlighting their potential to transform next-generation wireless networks.
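A minimal sketch of the hybrid action structure that makes H-PPO a natural fit here: a discrete head (e.g., a NOMA pairing choice) sampled jointly with a continuous head (e.g., a power split and phase shifts). The distribution parameters below are fixed placeholders; a real agent would produce them with a neural network, and the specific action names are assumptions for illustration.

```python
# Sampling one hybrid (discrete + continuous) action, as H-PPO-style agents do.
import numpy as np

rng = np.random.default_rng(0)

def sample_hybrid_action(logits, mu, sigma):
    # Discrete head: categorical over candidate NOMA pairings.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    pairing = rng.choice(len(probs), p=probs)
    # Continuous head: Gaussian over power split and phase shifts, then squashed.
    raw = rng.normal(mu, sigma)
    power_split = 1 / (1 + np.exp(-raw[0]))       # squashed into (0, 1)
    phases = np.mod(raw[1:], 2 * np.pi)           # wrapped to [0, 2*pi)
    return pairing, power_split, phases

logits = np.array([0.2, 1.0, -0.5])               # 3 candidate pairings
mu, sigma = np.zeros(5), np.ones(5)               # power split + 4 phase shifts
pairing, power_split, phases = sample_hybrid_action(logits, mu, sigma)
print("pairing:", pairing, "| power split:", round(power_split, 3), "| phases:", phases)
```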