Abstract: Effective channel estimation in sparse and high-dimensional environments is essential for next-generation wireless systems, particularly in large-scale MIMO deployments. This paper introduces a novel framework that leverages digital twins (DTs) as priors to enable efficient zone-specific subspace-based channel estimation (CE). Subspace-based CE significantly reduces feedback overhead by focusing on the dominant channel components, exploiting sparsity in the angular domain while preserving estimation accuracy. While DT channels may exhibit inaccuracies, their coarse-grained subspaces provide a powerful starting point, reducing the search space and accelerating convergence. The framework employs a two-step clustering process on the Grassmann manifold, combined with reinforcement learning (RL), to iteratively calibrate subspaces and align them with real-world counterparts. Simulations show that digital twins not only enable near-optimal performance but also enhance the accuracy of subspace calibration through RL, highlighting their potential as a step towards learnable digital twins.
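As a rough illustration of the subspace operations underlying this framework, the sketch below computes the chordal distance between two channel subspaces on the Grassmann manifold, the kind of metric used when clustering and calibrating zone-specific subspaces. The antenna dimensions, subspace rank, and noise level are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch: chordal distance between channel subspaces on the
# Grassmann manifold. Names and dimensions are hypothetical.
import numpy as np

def subspace_basis(H, r):
    """Orthonormal basis of the r-dominant column space of a channel matrix H."""
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :r]                      # (num_antennas, r) semi-unitary basis

def chordal_distance(U, V):
    """Chordal distance between two points on the Grassmann manifold."""
    r = U.shape[1]
    s = np.linalg.svd(U.conj().T @ V, compute_uv=False)  # cosines of principal angles
    return np.sqrt(max(r - np.sum(s**2), 0.0))

# Toy usage: distance between a (possibly inaccurate) digital-twin subspace
# and its real-world counterpart.
rng = np.random.default_rng(0)
H_dt   = rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))
H_real = H_dt + 0.1 * (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8)))
print(chordal_distance(subspace_basis(H_dt, 4), subspace_basis(H_real, 4)))
```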
Abstract: This paper explores a novel research direction where a digital twin is leveraged to assist the beamforming design for an integrated sensing and communication (ISAC) system. In this setup, a base station designs joint communication and sensing beamforming to serve the communication user and detect the sensing target concurrently. Utilizing the electromagnetic (EM) 3D model of the environment and ray tracing, the digital twin can provide information such as propagation path parameters and wireless channels to aid communication and sensing systems. More specifically, our digital twin-based beamforming design first leverages the environment EM 3D model and ray tracing to (i) predict the directions of the line-of-sight (LoS) and non-line-of-sight (NLoS) sensing channel paths and (ii) identify the dominant one among these sensing channel paths. Then, to optimize the joint sensing and communication beam, we maximize the sensing signal-to-noise ratio (SNR) on the dominant sensing channel component while satisfying a minimum communication signal-to-interference-plus-noise ratio (SINR) requirement. Simulation results show that the proposed digital twin-assisted beamforming design achieves near-optimal target sensing SNR in both LoS and NLoS dominant areas, while ensuring the required SINR for the communication user. This highlights the potential of leveraging digital twins to assist ISAC systems.
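The following minimal sketch illustrates the flavor of this joint beam design: a beam is formed as a power-split combination of the communication channel direction and the dominant sensing path direction predicted by the digital twin, and the split that maximizes the sensing SNR while meeting a communication SINR threshold is kept. The array size, angles, noise level, and threshold are hypothetical; the paper's actual optimizer may differ.

```python
# Minimal sketch (not the paper's optimizer): joint sensing/communication beam as a
# weighted combination of a user beam and a beam steered toward the dominant sensing
# path predicted by the digital twin. All channels and angles are synthetic.
import numpy as np

N = 32                                   # base-station antennas (assumed ULA)
def steer(theta_deg, n=N):
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg))) / np.sqrt(n)

h_comm  = steer(20.0)                    # communication user channel (toy LoS)
a_sense = steer(-35.0)                   # dominant sensing path direction from the DT
noise   = 1e-2

best = None
for rho in np.linspace(0, 1, 101):       # power split between the two beams
    f = np.sqrt(rho) * a_sense + np.sqrt(1 - rho) * h_comm
    f = f / np.linalg.norm(f)
    sinr = np.abs(h_comm.conj() @ f) ** 2 / noise    # single user: no interference term
    snr_sense = np.abs(a_sense.conj() @ f) ** 2 / noise
    if sinr >= 10.0 and (best is None or snr_sense > best[1]):
        best = (rho, snr_sense, sinr)
print(best)                              # (power split, sensing SNR, comm SINR)
```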
Abstract: This paper introduces a task-specific, model-agnostic framework for evaluating dataset similarity, providing a means to assess and compare dataset realism and quality. Such a framework is crucial for augmenting real-world data, improving benchmarking, and making informed retraining decisions when adapting to new deployment settings, such as different sites or frequency bands. The proposed framework is employed to design metrics based on UMAP topology-preserving dimensionality reduction, leveraging Wasserstein and Euclidean distances on latent-space KNN clusters. The designed metrics show correlations above 0.85 between dataset distances and model performance on an unsupervised channel state information (CSI) compression task using autoencoder architectures. The results show that the designed metrics outperform traditional dataset-similarity methods.
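A hedged sketch of one way such a metric could be realized is shown below: both datasets are embedded into a shared UMAP latent space and compared with a per-dimension Wasserstein distance. The exact clustering and aggregation steps in the paper may differ; the function and parameter choices here are assumptions.

```python
# Hedged sketch of a UMAP + Wasserstein dataset-similarity metric (illustrative only).
import numpy as np
import umap                               # pip install umap-learn
from scipy.stats import wasserstein_distance

def dataset_distance(X_a, X_b, n_components=2, random_state=0):
    reducer = umap.UMAP(n_components=n_components, random_state=random_state)
    Z = reducer.fit_transform(np.vstack([X_a, X_b]))   # shared latent space
    Z_a, Z_b = Z[:len(X_a)], Z[len(X_a):]
    # Average 1-D Wasserstein distance across latent dimensions (one simple choice).
    return np.mean([wasserstein_distance(Z_a[:, d], Z_b[:, d])
                    for d in range(n_components)])

rng = np.random.default_rng(0)
print(dataset_distance(rng.normal(size=(500, 64)),
                       rng.normal(0.5, 1.0, size=(500, 64))))
```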
Abstract: With the advent of next-generation wireless communication, millimeter-wave (mmWave) and terahertz (THz) technologies are pivotal for their high data rate capabilities. However, their reliance on large antenna arrays and narrow directive beams for ensuring adequate receive signal power introduces significant beam training overhead. This becomes particularly challenging in supporting highly-mobile applications such as drone communication, where the dynamic nature of drones demands frequent beam alignment to maintain connectivity. Addressing this critical bottleneck, our paper introduces a novel machine learning-based framework that leverages multi-modal sensory data, including visual and positional information, to expedite and refine mmWave/THz beam prediction. Unlike conventional approaches that solely depend on exhaustive beam training methods, our solution incorporates additional layers of contextual data to accurately predict beam directions, significantly mitigating the training overhead. Additionally, our framework is capable of predicting future beam alignments ahead of time. This feature enhances the system's responsiveness and reliability by addressing the challenges posed by the drones' mobility and the computational delays encountered in real-time processing. This capability for advanced beam tracking marks a critical advancement in maintaining seamless connectivity for highly-mobile drones. We validate our approach through comprehensive evaluations on a unique, real-world mmWave drone communication dataset, which integrates concurrent camera visuals, practical GPS coordinates, and mmWave beam training data...
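The sketch below shows, at a conceptual level, how a multi-modal beam predictor of this kind could be structured in PyTorch: an image encoder and a position encoder are fused to classify the best beam index in a codebook. The backbone, layer sizes, and codebook size of 64 are illustrative assumptions, not the paper's architecture.

```python
# Illustrative PyTorch sketch of a multi-modal (vision + position) beam classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class BeamPredictor(nn.Module):
    def __init__(self, num_beams=64, pos_dim=2):
        super().__init__()
        backbone = resnet18(weights=None)          # visual encoder (untrained here)
        backbone.fc = nn.Identity()                # expose 512-d image features
        self.vision = backbone
        self.pos = nn.Sequential(nn.Linear(pos_dim, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Sequential(nn.Linear(512 + 64, 128), nn.ReLU(),
                                  nn.Linear(128, num_beams))

    def forward(self, image, position):
        feat = torch.cat([self.vision(image), self.pos(position)], dim=-1)
        return self.head(feat)                     # beam logits

model = BeamPredictor()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 2))
print(logits.argmax(dim=-1))                       # predicted beam indices
```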
Abstract: This paper presents the Large Wireless Model (LWM) -- the world's first foundation model for wireless channels. Designed as a task-agnostic model, LWM generates universal, rich, contextualized channel embeddings (features) that potentially enhance performance across a wide range of downstream tasks in wireless communication and sensing systems. Towards this objective, LWM, which has a transformer-based architecture, was pre-trained in a self-supervised manner on large-scale wireless channel datasets. Our results show consistent improvements in classification and regression tasks when using the LWM embeddings compared to raw channel representations, especially in scenarios with high-complexity machine learning tasks and limited training datasets. LWM's ability to learn from large-scale wireless data opens a promising direction for intelligent systems that can efficiently adapt to diverse tasks with limited data, paving the way for addressing key challenges in wireless communication and sensing systems.
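The following conceptual sketch mirrors the downstream evaluation described above: the same classifier is trained once on raw channel features and once on embeddings produced by the pre-trained model. The `lwm_embed` function is a hypothetical placeholder for the actual LWM inference call, not its real API.

```python
# Conceptual sketch of the embeddings-vs-raw-channels comparison (illustrative only).
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def lwm_embed(channels):
    """Placeholder for the pre-trained LWM encoder (returns contextual embeddings)."""
    raise NotImplementedError

def downstream_accuracy(features, labels):
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                              test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# raw_channels: (num_samples, num_antennas * num_subcarriers) flattened channels
# acc_raw = downstream_accuracy(raw_channels, labels)
# acc_lwm = downstream_accuracy(lwm_embed(raw_channels), labels)
```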
Abstract: Leveraging perception from radar data can assist multiple communication tasks, especially in highly-mobile and large-scale MIMO systems. One particular challenge, however, is how to distinguish the communication user (object) from the other mobile objects in the sensing scene. This paper formulates this \textit{user identification} problem and develops two solutions: a baseline model-based solution that maps the objects' angles from the radar scene to communication beams, and a scalable deep learning solution that is agnostic to the number of candidate objects. Using the DeepSense 6G dataset, which comprises real-world measurements, the developed deep learning approach achieves more than $93.4\%$ communication user identification accuracy, highlighting a promising path for enabling integrated radar-communication applications in the real world.
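A rough sketch of the baseline, model-based idea is given below: each radar-detected object's angle is mapped to its nearest beam in a codebook, and the communication user is identified as the object whose beam best matches the beam selected during beam training. The codebook construction and angles are synthetic and only illustrative.

```python
# Sketch of the baseline angle-to-beam user identification idea (synthetic values).
import numpy as np

num_beams = 64
beam_angles = np.rad2deg(np.arcsin(np.linspace(-1, 1, num_beams)))  # assumed ULA beam directions

def angle_to_beam(theta_deg):
    """Nearest codebook beam index for a radar-estimated object angle."""
    return int(np.argmin(np.abs(beam_angles - theta_deg)))

def identify_user(object_angles_deg, comm_beam_idx):
    """Pick the object whose mapped beam is closest to the communication beam."""
    beams = [angle_to_beam(a) for a in object_angles_deg]
    return int(np.argmin([abs(b - comm_beam_idx) for b in beams]))

print(identify_user([-40.0, 5.0, 32.0], comm_beam_idx=angle_to_beam(30.0)))  # -> 2
```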
Abstract: In this document, we revise the results of [1] based on more reasonable assumptions regarding data shuffling and the parameter setup of deep neural networks (DNNs). Thus, the simulation results can now more reasonably demonstrate the performance of both the proposed and compared beam alignment methods. We revise the simulation steps and make moderate modifications to the design of the vehicle distribution feature (VDF) for the proposed vision-based beam alignment when the MS location is available (VBALA). Specifically, we replace the 2D grids of the VDF with 3D grids and utilize the vehicle locations to expand the dimensions of the VDF. Then, we revise the simulation results of Fig. 11, Fig. 12, Fig. 13, Fig. 14, and Fig. 15 in [1] to reaffirm the validity of the conclusions.
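For illustration only, the sketch below shows one way a 3D-grid vehicle distribution feature could be constructed by binning vehicle locations into a voxel grid; the grid extents and resolution are assumptions, not the values used in [1].

```python
# Hedged sketch of a 3D-grid vehicle distribution feature (VDF), illustrative only.
import numpy as np

def build_vdf_3d(vehicle_xyz, grid_size=(16, 16, 4),
                 extent=((-50, 50), (-50, 50), (0, 10))):
    vdf = np.zeros(grid_size, dtype=np.float32)
    for p in vehicle_xyz:
        idx = []
        for d in range(3):
            lo, hi = extent[d]
            if not (lo <= p[d] < hi):
                break                          # vehicle outside the grid: skip it
            idx.append(int((p[d] - lo) / (hi - lo) * grid_size[d]))
        else:
            vdf[tuple(idx)] += 1.0             # occupancy count per voxel
    return vdf

print(build_vdf_3d([[0.0, 0.0, 1.5], [10.0, -5.0, 1.5]]).sum())  # -> 2.0
```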
Abstract: Beamforming design is critical for the efficient operation of integrated sensing and communication (ISAC) MIMO systems. ISAC beamforming design in cell-free massive MIMO systems, compared to colocated MIMO systems, is more challenging due to the additional complexity of the large number of distributed access points (APs). To address this problem, this paper first shows that graph neural networks (GNNs) are a suitable machine learning framework. Then, it develops a novel heterogeneous GNN model inspired by the specific characteristics of cell-free ISAC MIMO systems. This model enables low-complexity scaling of the cell-free ISAC system and does not require full retraining when APs are added or removed. Our results show that the proposed architecture achieves near-optimal performance and generalizes well to various network structures.
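The sketch below illustrates, under assumed node/edge types and feature sizes, how such a cell-free ISAC system could be represented as a heterogeneous graph and processed with a heterogeneous GNN layer in PyTorch Geometric; it is not the paper's exact model.

```python
# Illustrative heterogeneous graph for a cell-free ISAC setup (assumed sizes/types).
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HeteroConv, SAGEConv

data = HeteroData()
data['ap'].x = torch.randn(6, 16)         # per-AP features (e.g., local CSI statistics)
data['user'].x = torch.randn(4, 8)        # per-user/target features
edge = torch.stack([torch.randint(0, 6, (12,)),    # source AP indices
                    torch.randint(0, 4, (12,))])   # destination user indices
data['ap', 'serves', 'user'].edge_index = edge
data['user', 'served_by', 'ap'].edge_index = edge.flip(0)

# One heterogeneous message-passing layer; lazy (-1, -1) input sizes adapt to the graph.
conv = HeteroConv({
    ('ap', 'serves', 'user'):    SAGEConv((-1, -1), 32),
    ('user', 'served_by', 'ap'): SAGEConv((-1, -1), 32),
}, aggr='sum')

out = conv(data.x_dict, data.edge_index_dict)
# Per-AP embeddings could feed a beamforming head; adding or removing APs only
# changes the graph, not the trained weights.
print(out['ap'].shape, out['user'].shape)  # torch.Size([6, 32]) torch.Size([4, 32])
```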
Abstract: Fully harvesting the gain of multiple-input multiple-output (MIMO) systems requires accurate channel information. However, conventional channel acquisition methods mainly rely on pilot training signals, resulting in significant training overheads (time, energy, spectrum). Digital twin-aided communications have been proposed in [1] to reduce or eliminate this overhead by approximating the real world with a digital replica. However, implementing a digital twin-aided communication system brings new challenges; in particular, how to model the 3D environment and its associated EM properties, and how to update the environment dynamics in a coherent manner. To address these challenges, motivated by the latest advancements in computer vision, 3D reconstruction, and neural radiance fields, we propose an end-to-end deep learning framework for future generation wireless systems that can reconstruct the 3D EM field covered by a wireless access point, based on widely available crowd-sourced, world-locked wireless samples between the access point and the devices. This visionary framework is grounded in classical EM theory and employs deep learning models to learn the EM properties and interaction behaviors of the objects in the environment. Simulation results demonstrate that the proposed learnable digital twin can implicitly learn the EM properties of the objects, accurately predict wireless channels, and generalize to changes in the environment, highlighting the promise of this novel direction for future generation wireless platforms.
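A minimal conceptual sketch of a learnable digital twin component, loosely inspired by neural radiance fields, is shown below: an MLP maps a device's 3D position to a predicted frequency-domain channel and could be fit to crowd-sourced (position, measurement) pairs. The network size and output parameterization are assumptions, not the proposed framework itself.

```python
# Conceptual sketch: an MLP "EM field" that maps 3D position to a predicted channel.
import torch
import torch.nn as nn

class NeuralEMField(nn.Module):
    def __init__(self, num_subcarriers=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2 * num_subcarriers),   # real + imaginary parts per subcarrier
        )

    def forward(self, xyz):
        out = self.net(xyz)
        return torch.complex(out[..., ::2], out[..., 1::2])

model = NeuralEMField()
positions = torch.randn(8, 3)              # crowd-sourced device locations (toy)
pred_channels = model(positions)           # predicted frequency-domain channels
print(pred_channels.shape)                 # torch.Size([8, 64])
```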
Abstract: Accurate localization is crucial for various applications, including autonomous vehicles and next-generation wireless networks. However, the reliability and precision of Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), are compromised by multi-path errors and non-line-of-sight scenarios. This paper presents a novel approach to enhance GPS accuracy by combining visual data from RGB cameras with wireless signals captured at millimeter-wave (mmWave) and sub-terahertz (sub-THz) basestations. We propose a sensing-aided framework for (i) site-specific GPS data characterization and (ii) GPS position de-noising that utilizes multi-modal visual and wireless information. Our approach is validated in a realistic Vehicle-to-Infrastructure (V2I) scenario using a comprehensive real-world dataset, demonstrating a substantial reduction in localization error to sub-meter levels. This method represents a significant advancement in achieving precise localization, particularly beneficial for high-mobility applications in 5G and beyond networks.
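As a hedged illustration of the de-noising idea, the sketch below trains a small regressor that maps a noisy GPS fix plus an auxiliary wireless feature to a position correction on synthetic data; the features and error model are toy assumptions, not the paper's pipeline.

```python
# Hedged sketch of GPS de-noising as a learned position correction (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 100, size=(2000, 2))
noisy_gps = true_pos + rng.normal(0, 3.0, size=true_pos.shape)   # multipath-like error
beam_idx = (true_pos[:, :1] / 100 * 63).round()                  # toy wireless feature
X = np.hstack([noisy_gps, beam_idx])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, true_pos - noisy_gps)                               # learn the correction
corrected = noisy_gps + model.predict(X)
print(np.mean(np.linalg.norm(corrected - true_pos, axis=1)))     # mean residual error (m)
```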