Abstract:Reconfigurable Intelligent Surfaces (RISs) are emerging as a transformative technology poised to revolutionize the cellular architecture of Next Generation (NextG) Radio Access Networks (RANs). Previous studies have demonstrated the capabilities of RISs in optimizing wireless propagation, achieving high spectral efficiency, and improving resource utilization. At the same time, the transition to softwarized, disaggregated, and virtualized architectures, such as those being standardized by the O-RAN ALLIANCE, enables the vision of a reconfigurable Open RAN. In this work, we aim to integrate these technologies by studying how different resource allocation policies enhance the performance of RIS-assisted Open RANs. We perform a comparative analysis among various network configurations and show how proper network optimization can enhance the performance across the Enhanced Mobile Broadband (eMBB) and Ultra Reliable and Low Latency Communications (URLLC) network slices, achieving up to ~34% throughput improvement. Furthermore, leveraging the capabilities of OpenRAN Gym, we deploy an xApp on Colosseum, the world's largest wireless system emulator with hardware-in-the-loop, to control the scheduling policy of the Base Station (BS). Experimental results demonstrate that RIS-assisted topologies achieve high resource efficiency and low latency, regardless of the BS's scheduling policy.
Abstract:Non-terrestrial networks (NTNs) are essential for ubiquitous connectivity, providing coverage in remote and underserved areas. However, since NTNs are currently operated independently, they face challenges such as isolation, limited scalability, and high operational costs. Integrating satellite constellations with terrestrial networks offers a way to address these limitations while enabling adaptive and cost-efficient connectivity through the application of Artificial Intelligence (AI) models. This paper introduces Space-O-RAN, a framework that extends Open Radio Access Network (RAN) principles to NTNs. It employs hierarchical closed-loop control with distributed Space RAN Intelligent Controllers (Space-RICs) to dynamically manage and optimize operations across both domains. To enable adaptive resource allocation and network orchestration, the proposed architecture integrates real-time satellite optimization and control with AI-driven management and digital twin (DT) modeling. It incorporates distributed Space Applications (sApps) and dApps to ensure robust performance in highly dynamic orbital environments. A core feature is dynamic link-interface mapping, which allows network functions to adapt to specific application requirements and changing link conditions using all physical links on the satellite. Simulation results evaluate its feasibility by analyzing latency constraints across different NTN link types, demonstrating that intra-cluster coordination operates within viable signaling delay bounds, while offloading non-real-time tasks to ground infrastructure enhances scalability toward sixth-generation (6G) networks.
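The latency analysis above hinges on propagation delay differences across NTN link types. As a minimal illustration, the sketch below computes one-way propagation delays for representative orbital altitudes; the altitudes and the overhead-pass assumption are illustrative and not taken from the paper's link budget.

```python
# Illustrative one-way propagation delays for representative NTN links.
# Assumes a zenith (directly overhead) pass; actual slant ranges are longer.

C = 299_792_458.0  # speed of light in vacuum [m/s]

ALTITUDES_KM = {
    "LEO (~550 km)": 550,
    "MEO (~8,000 km)": 8_000,
    "GEO (35,786 km)": 35_786,
}

for link, alt_km in ALTITUDES_KM.items():
    delay_ms = (alt_km * 1e3) / C * 1e3
    print(f"{link}: one-way propagation delay ~ {delay_ms:.2f} ms")
```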
Abstract:5G and beyond cellular systems embrace the disaggregation of Radio Access Network (RAN) components, exemplified by the evolution of the fronthaul (FH) connection between cellular baseband and radio unit equipment. Crucially, synchronization over the FH is pivotal for reliable 5G services. In recent years, there has been a push to move these links to an Ethernet-based packet network topology, leveraging existing standards and ongoing research for Time-Sensitive Networking (TSN). However, TSN standards, such as Precision Time Protocol (PTP), focus on performance with little to no concern for security. This increases the exposure of the open FH to security risks. Attacks targeting synchronization mechanisms pose significant threats, potentially disrupting 5G networks and impairing connectivity. In this paper, we demonstrate the impact of successful spoofing and replay attacks against PTP synchronization. We show how a spoofing attack is able to cause a production-ready O-RAN and 5G-compliant private cellular base station to catastrophically fail within 2 seconds of the attack, necessitating manual intervention to restore full network operations. To counter this, we design a Machine Learning (ML)-based monitoring solution capable of detecting various malicious attacks with over 97.5% accuracy.
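As a hedged sketch of what an ML-based PTP monitor of this kind could look like, the snippet below trains a supervised classifier on per-exchange timing features. The feature set (offset, path delay, inter-arrival jitter) and the synthetic data are assumptions for illustration, not the dataset or model used in the paper.

```python
# Hypothetical sketch: flag anomalous PTP exchanges with a supervised classifier.
# Features and synthetic distributions below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Benign exchanges: small offset/jitter. Spoofed exchanges: large injected offsets.
benign = rng.normal(loc=[0.0, 50e-6, 1e-6], scale=[5e-6, 5e-6, 0.5e-6], size=(5000, 3))
attack = rng.normal(loc=[2e-3, 50e-6, 20e-6], scale=[1e-3, 5e-6, 5e-6], size=(5000, 3))

X = np.vstack([benign, attack])
y = np.hstack([np.zeros(len(benign)), np.ones(len(attack))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```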
Abstract:The development of 6G wireless technologies is rapidly advancing, with the 3rd Generation Partnership Project (3GPP) entering the pre-standardization phase and aiming to deliver the first specifications by 2028. This paper explores the OpenAirInterface (OAI) project, an open-source initiative that plays a crucial role in the evolution of 5G and the future 6G networks. OAI provides a comprehensive implementation of 3GPP and O-RAN compliant networks, including Radio Access Network (RAN), Core Network (CN), and software-defined User Equipment (UE) components. The paper details the history and evolution of OAI, its licensing model, and the various projects under its umbrella, such as the RAN, the CN, and the Operations, Administration and Maintenance (OAM) projects. It also highlights the development methodology, Continuous Integration/Continuous Delivery (CI/CD) processes, and end-to-end systems powered by OAI. Furthermore, the paper discusses the potential of OAI for 6G research, focusing on spectrum, reflective intelligent surfaces, and Artificial Intelligence (AI)/Machine Learning (ML) integration. The open-source approach of OAI is emphasized as essential for tackling the challenges of 6G, fostering community collaboration, and driving innovation in next-generation wireless technologies.
Abstract:Reconfigurable Intelligent Surfaces (RISs) are a promising technique for enhancing the performance of Next Generation (NextG) wireless communication systems in terms of both spectral and energy efficiency, as well as resource utilization. However, current RIS research has primarily focused on theoretical modeling and Physical (PHY) layer considerations only. Full protocol stack emulation and accurate modeling of the propagation characteristics of the wireless channel are necessary for studying the benefits introduced by RIS technology across various spectrum bands and use-cases. In this paper, we propose, for the first time: (i) accurate PHY layer RIS-enabled channel modeling through Geometry-Based Stochastic Models (GBSMs), leveraging the QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) open-source statistical ray-tracer; (ii) optimized resource allocation with RISs by comprehensively studying energy efficiency and power control on different portions of the spectrum through a single-leader multiple-followers Stackelberg game theoretical approach; (iii) full-stack emulation and performance evaluation of RIS-assisted channels with SCOPE/srsRAN for Enhanced Mobile Broadband (eMBB) and Ultra Reliable and Low Latency Communications (URLLC) applications in the world's largest emulator of wireless systems with hardware-in-the-loop, namely Colosseum. Our findings indicate (i) the significant power savings in terms of energy efficiency achieved with RIS-assisted topologies, especially in the millimeter wave (mmWave) band; and (ii) the benefits introduced for Sub-6 GHz band User Equipments (UEs), where the deployment of a relatively small RIS (e.g., in the order of 100 RIS elements) can result in decreased levels of latency for URLLC services in resource-constrained environments.
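To make the single-leader, multiple-follower structure concrete, the sketch below shows one common textbook form of such a Stackelberg power-control game: a leader posts a price per unit of transmit power, each follower best-responds by maximizing a rate-minus-cost utility, and the leader searches the price that maximizes its revenue. The channel gains, utilities, and price grid are illustrative assumptions, not the formulation used in the paper.

```python
# Minimal sketch of a single-leader, multiple-follower Stackelberg power-control game.
# Gains, noise power, and utilities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
g = rng.uniform(0.2, 1.0, size=8)   # effective (RIS-assisted) channel gains per UE
noise = 1e-2                         # noise power
p_max = 1.0                          # per-UE power budget

def follower_best_response(price):
    # Each follower maximizes log(1 + g*p/noise) - price*p, giving a water-filling form.
    p = 1.0 / price - noise / g
    return np.clip(p, 0.0, p_max)

# Leader: pick the price that maximizes revenue = price * total follower power.
price_star, _ = max(
    ((price, price * follower_best_response(price).sum())
     for price in np.linspace(0.5, 20.0, 400)),
    key=lambda t: t[1],
)
print("leader price:", round(price_star, 3),
      "follower powers:", np.round(follower_best_response(price_star), 3))
```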
Abstract:Accurate real-time channel modeling remains remarkably challenging due to the complexity of traditional methods such as ray tracing and field measurements. AI-based techniques have emerged to address these limitations, offering rapid, precise predictions of channel properties learned from ground-truth data. This paper introduces an innovative approach to real-time, high-fidelity propagation modeling through advanced deep learning. Our model integrates 3D geographical data and rough propagation estimates to generate precise path gain predictions. By positioning the transmitter centrally, we simplify the model and enhance its computational efficiency, making it amenable to larger scenarios. Our approach achieves a normalized Root Mean Squared Error of less than 0.035 dB over a 37,210 square meter area, processing in just 46 ms on a GPU and 183 ms on a CPU. This performance significantly surpasses traditional high-fidelity ray tracing methods, which require approximately three orders of magnitude more time. Additionally, the model's adaptability to real-world data highlights its potential to revolutionize wireless network design and optimization, by enabling the real-time creation of adaptive digital twins of real-world wireless scenarios in dynamic environments.
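A minimal sketch of this kind of pipeline is shown below: a small fully-convolutional network maps a raster of 3D geometry plus a coarse propagation estimate to a path-gain map, and a normalized RMSE is computed against a reference. The architecture, grid size, and synthetic tensors are assumptions for illustration, not the model or metric definition from the paper.

```python
# Sketch: map (building heights, coarse estimate) rasters to a path-gain map,
# then compute a normalized RMSE. Architecture and data are illustrative.
import torch
import torch.nn as nn

class PathGainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):           # x: [B, 2, H, W] = (geometry, rough estimate)
        return self.net(x)          # predicted path-gain map [B, 1, H, W]

def normalized_rmse(pred, target):
    # RMSE normalized by the dynamic range of the ground-truth map.
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    return rmse / (target.max() - target.min())

model = PathGainNet()
x = torch.randn(1, 2, 128, 128)     # synthetic scene centered on the transmitter
y_true = torch.randn(1, 1, 128, 128)
print(normalized_rmse(model(x), y_true).item())
```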
Abstract:This paper introduces an innovative framework designed for progressive (granular in time to onset) prediction of seizures through the utilization of a Deep Learning (DL) methodology based on non-invasive multi-modal sensor networks. Epilepsy, a debilitating neurological condition, affects an estimated 65 million individuals globally, with a substantial proportion facing drug-resistant epilepsy despite pharmacological interventions. To address this challenge, we advocate for predictive systems that provide timely alerts to individuals at risk, enabling them to take precautionary actions. Our framework employs advanced DL techniques and uses personalized data from a network of non-invasive electroencephalogram (EEG) and electrocardiogram (ECG) sensors, thereby enhancing prediction accuracy. The algorithms are optimized for real-time processing on edge devices, mitigating privacy concerns and minimizing data transmission overhead inherent in cloud-based solutions, ultimately preserving battery energy. Additionally, our system predicts the countdown time to seizures (with 15-minute intervals up to an hour prior to the onset), offering critical lead time for preventive actions. Our multi-modal model achieves 95% sensitivity, 98% specificity, and 97% accuracy, averaged among 29 patients.
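The countdown formulation above (15-minute intervals up to an hour before onset) can be read as a coarse classification of time-to-onset. The sketch below shows one possible label encoding under that reading; the bin edges and class names are assumptions, not the paper's exact scheme.

```python
# Hypothetical countdown labeling: bin time-to-onset into 15-minute classes.
def countdown_label(minutes_to_onset: float) -> str:
    """Map time-to-onset (minutes) to a coarse countdown class."""
    if minutes_to_onset < 0:
        return "ictal/post-onset"
    if minutes_to_onset <= 15:
        return "0-15 min"
    if minutes_to_onset <= 30:
        return "15-30 min"
    if minutes_to_onset <= 45:
        return "30-45 min"
    if minutes_to_onset <= 60:
        return "45-60 min"
    return "interictal (>60 min)"

print([countdown_label(m) for m in (5, 20, 50, 90)])
```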
Abstract:In the context of fifth-generation new radio (5G NR) technology, it is not possible to directly obtain an absolute uplink (UL) channel impulse response (CIR) at the base station (gNB) from a user equipment (UE). The UL CIR obtained through the sounding reference signal (SRS) is always time-shifted by the timing advance (TA) applied at the UE. The TA is crucial for maintaining UL synchronization, and transmitting SRS without applying the TA will result in interference. In this work, we propose a new method to obtain absolute UL CIR from a UE and then use it to estimate the round trip time (RTT) at the gNB. This method requires enhancing the current 5G protocol stack with a new Zadoff-Chu (ZC) based wideband uplink reference signal (URS). Capitalizing on the cyclic shift property of the URS sequence, we can obtain the RTT with a significant reduction in overhead and latency compared to existing schemes. The proposed method is experimentally validated using a real-world testbed based on OpenAirInterface (OAI).
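The core idea leveraged here, that a propagation delay appears as a cyclic shift of a Zadoff-Chu (ZC) sequence and can be recovered by circular correlation, is sketched below. The sequence length, root index, sample rate, and noiseless channel are illustrative assumptions rather than the URS design proposed in the paper.

```python
# Sketch of the cyclic-shift idea: a delayed ZC sequence is a cyclically shifted
# copy of the reference, and circular correlation recovers the shift.
import numpy as np

N_ZC, ROOT = 839, 25                 # prime-length ZC sequence, illustrative root
SAMPLE_RATE = 30.72e6                # illustrative baseband sample rate [samples/s]

n = np.arange(N_ZC)
zc = np.exp(-1j * np.pi * ROOT * n * (n + 1) / N_ZC)   # Zadoff-Chu reference

true_shift = 42                                         # delay in samples (unknown to gNB)
rx = np.roll(zc, true_shift)                            # received = cyclically shifted copy

# Circular cross-correlation via FFT; the peak index is the estimated shift.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(zc)))
est_shift = int(np.argmax(np.abs(corr)))

delay_estimate = est_shift / SAMPLE_RATE                # round-trip delay for this sketch
print(est_shift, f"{delay_estimate * 1e6:.2f} us")
```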
Abstract:This demo paper presents a dApp-based real-time spectrum sharing scenario where a 5th generation (5G) base station implementing the NR stack adapts its transmission and reception strategies based on the incumbent priority users in the Citizen Broadband Radio Service (CBRS) band. The dApp is responsible for obtaining relevant measurements from the Next Generation Node Base (gNB), running the spectrum sensing inference, and configuring the gNB with a control action upon detecting the primary incumbent user transmissions. This approach is built on dApps, which extend the O-RAN framework to the real-time and user plane domains. Thus, it avoids the need for dedicated Spectrum Access Systems (SASs) in the CBRS band. The demonstration setup is based on the open-source 5G OpenAirInterface (OAI) framework, where we have implemented a dApp interfaced with a gNB and communicating with a Commercial Off-the-Shelf (COTS) User Equipment (UE) in an over-the-air wireless environment. When an incumbent user is actively transmitting, the dApp detects it and informs the gNB of the primary user's presence. The dApp also enforces a control policy that adapts the scheduling and transmission policy of the Radio Access Network (RAN). This demo provides valuable insights into the potential of using dApp-based spectrum sensing with the O-RAN architecture in next-generation cellular networks.
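As a hedged illustration of the sensing step such a dApp could run on IQ samples exposed by the gNB, the snippet below implements a simple energy detector that flags incumbent activity when measured band energy exceeds the noise floor by a margin. The threshold, sample counts, and synthetic signals are assumptions; the demo's actual inference model may differ.

```python
# Hypothetical energy-detection check for incumbent activity on IQ samples.
import numpy as np

rng = np.random.default_rng(2)

def incumbent_present(iq: np.ndarray, noise_power: float, margin_db: float = 6.0) -> bool:
    """Return True if measured energy exceeds the noise floor by margin_db."""
    measured = np.mean(np.abs(iq) ** 2)
    return 10 * np.log10(measured / noise_power) > margin_db

noise = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
incumbent = noise + 2.0 * np.exp(2j * np.pi * 0.1 * np.arange(4096))   # strong tone

print(incumbent_present(noise, noise_power=1.0))       # expected: False
print(incumbent_present(incumbent, noise_power=1.0))   # expected: True
```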
Abstract:Digital twins are now a staple of wireless network design and evolution. Creating an accurate digital copy of a real system offers numerous opportunities to study and analyze its performance and issues. It also allows designing and testing new solutions in a risk-free environment, and applying them back to the real system after validation. A candidate technology that will heavily rely on digital twins for design and deployment is 6G, which promises robust and ubiquitous networks for eXtended Reality (XR) and immersive communications solutions. In this paper, we present BostonTwin, a dataset that merges a high-fidelity 3D model of the city of Boston, MA, with the existing geospatial data on cellular base station deployments, in a ray-tracing-ready format. Thus, BostonTwin enables not only the instantaneous rendering and programmatic access to the building models, but it also allows for an accurate representation of the electromagnetic propagation environment in the real-world city of Boston. The level of detail and accuracy of this characterization is crucial to designing 6G networks that can support the strict requirements of sensitive and high-bandwidth applications, such as XR and immersive communication.