Abstract: Accurate RF propagation modeling in urban environments is critical for developing digital spectrum twins and optimizing wireless communication systems. We introduce OpenGERT, an open-source automated Geometry Extraction tool for Ray Tracing, which collects and processes terrain and building data from OpenStreetMap, Microsoft Global ML Building Footprints, and USGS elevation data. Using the Blender Python API, it creates detailed urban models for high-fidelity simulations with NVIDIA Sionna RT. We perform sensitivity analyses to examine how variations in building height, position, and electromagnetic material properties affect ray-tracing accuracy. Specifically, we present pairwise dispersion plots of channel statistics (path gain, mean excess delay, delay spread, link outage, and Rician K-factor) and investigate how their sensitivities change with distance from transmitters. We also visualize the variance of these statistics for selected transmitter locations to gain deeper insights. Our study covers the Munich and Etoile scenes, each with 10 transmitter locations. For each location, we apply five types of perturbations: material, position, height, height-position, and all combined, with 50 perturbations each. Results show that small changes in permittivity and conductivity minimally affect channel statistics, whereas variations in building height and position significantly alter all statistics, even with noise standard deviations of 1 meter in height and 0.4 meters in position. These findings highlight the importance of precise environmental modeling for accurate propagation predictions, essential for digital spectrum twins and advanced communication networks. The code for geometry extraction and sensitivity analyses is available at github.com/serhatadik/OpenGERT/.
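The height and position perturbations described above can be sketched as follows. This is a minimal illustrative example, not the OpenGERT implementation: the function name `perturb_buildings` and the toy scene data are assumptions; only the noise standard deviations (1 m for height, 0.4 m for position) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def perturb_buildings(heights, positions, h_sigma=1.0, p_sigma=0.4):
    """Apply zero-mean Gaussian noise to building heights (std 1 m)
    and 2-D footprint positions (std 0.4 m per axis), mimicking the
    height-position perturbation type used in the sensitivity study."""
    noisy_h = heights + rng.normal(0.0, h_sigma, size=heights.shape)
    noisy_h = np.maximum(noisy_h, 0.0)  # keep heights non-negative
    noisy_p = positions + rng.normal(0.0, p_sigma, size=positions.shape)
    return noisy_h, noisy_p

# One perturbation realization for a toy scene of three buildings.
heights = np.array([12.0, 25.0, 8.0])                           # meters
positions = np.array([[0.0, 0.0], [50.0, 10.0], [30.0, 40.0]])  # meters
h, p = perturb_buildings(heights, positions)
```

In the study, 50 such realizations per perturbation type would each be re-meshed and re-simulated in the ray tracer to obtain the dispersion of the channel statistics.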
Abstract: In this paper, we consider the importance of channel measurement data from specific sites and its impact on air interface optimization and test. Currently, a range of statistical channel models, including 3GPP 38.901 tapped delay line (TDL), clustered delay line (CDL), urban microcell (UMi), and urban macrocell (UMa) type channels, are widely used for air interface performance testing and simulation. However, there remains a gap in the realism of these models for air interface testing and optimization when compared with real-world measurement-based channels. To address this gap, we compare the performance impacts of training neural receivers with 1) statistical 3GPP TDL models, and 2) measured macro-cell channel impulse response (CIR) data. We leverage our OmniPHY-5G neural receiver for NR PUSCH uplink simulation, with a training procedure that uses statistical TDL channel models for pre-training and measured site-specific MIMO CIR data for fine-tuning. The proposed fine-tuning method achieves a 10% block error rate (BLER) at a 1.85 dB lower signal-to-noise ratio (SNR) compared to pre-training only on simulated TDL channels, illustrating the rough magnitude of the gap that can be closed by site-specific training, and giving a first answer to the question "how much can fine-tuning the RAN for site-specific channels help?"
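The two-stage schedule described above (pre-train on simulated channels, then fine-tune on measured site-specific data) can be sketched in miniature. This is a toy sketch only, assuming a linear "receiver" fit by gradient descent on synthetic data; the OmniPHY-5G receiver, the TDL channel generator, and the measured CIR dataset are all stand-ins here, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_step(w, x, y, lr=0.1):
    """One least-squares gradient step for a toy linear 'receiver'."""
    grad = x.T @ (x @ w - y) / len(y)
    return w - lr * grad

def fit(w, x, y, steps):
    for _ in range(steps):
        w = train_step(w, x, y)
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

# Stage 1: pre-train on plentiful synthetic (TDL-like) data.
x_sim = rng.normal(size=(256, 4))
w_sim = np.array([1.0, -0.5, 0.3, 0.8])        # "simulated channel" behavior
y_sim = x_sim @ w_sim + 0.1 * rng.normal(size=256)
w_pre = fit(np.zeros(4), x_sim, y_sim, steps=200)

# Stage 2: fine-tune on a smaller 'measured' set whose statistics are
# shifted relative to the simulation (the sim-to-real gap).
x_meas = rng.normal(size=(64, 4))
y_meas = x_meas @ (w_sim + 0.2) + 0.05 * rng.normal(size=64)
w_ft = fit(w_pre.copy(), x_meas, y_meas, steps=50)
```

The point of the sketch is structural: the pre-trained model is near-optimal for the simulated statistics but mismatched to the measured ones, and a short fine-tuning pass on the measured data closes that gap, analogous to the 1.85 dB SNR improvement reported in the paper.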