Abstract: Distributed multiple-input multiple-output (D-MIMO) is a promising technology for realizing massive MIMO gains by fiber-connecting distributed antenna arrays, thereby overcoming the form factor limitations of co-located MIMO. In this paper, we introduce the concept of a mobile D-MIMO (MD-MIMO) network, a further extension of D-MIMO technology in which the distributed antenna arrays are connected to the base station over a wireless link, allowing all radio network nodes to be mobile. This approach significantly improves deployment flexibility and reduces operating costs, enabling the network to adapt to the highly dynamic nature of next-generation (NextG) networks. We discuss use cases, system design, network architecture, and the key enabling technologies for MD-MIMO. Furthermore, we investigate a case study of MD-MIMO for vehicular networks, presenting detailed performance evaluations for both downlink and uplink. The results show that an MD-MIMO network can provide substantial improvements in network throughput and reliability.
Abstract: This paper investigates the spectral efficiency achieved through uplink joint transmission, where a served user and other user equipments (UEs) in the network collaborate by jointly transmitting to the base station (BS). The analysis incorporates the resource requirements for information sharing among the UEs as a critical factor in the capacity evaluation. Furthermore, coherent and non-coherent joint transmission schemes are compared under various transmission power scenarios, providing insights into spectral and energy efficiency. A selection algorithm that identifies the optimal UEs for joint transmission to achieve maximum capacity is also discussed. The results indicate that uplink joint transmission is a promising technique for enabling 6G, achieving greater spectral efficiency even when the resource requirements for information sharing are accounted for.
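To make the overhead-aware comparison concrete, the following minimal sketch contrasts coherent and non-coherent uplink joint transmission when a fraction of the resources is charged to UE-to-UE information sharing. The per-UE powers `p`, channel gains `g`, noise power `n0`, and overhead fraction `alpha` are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: effective spectral efficiency of uplink joint transmission
# (JT), charging a fraction `alpha` of the resources to information sharing.
import numpy as np

def effective_se(p, g, n0, alpha, coherent=True):
    """Spectral efficiency (bit/s/Hz) after the sharing overhead.

    p, g : per-UE transmit powers (W) and channel power gains
    n0   : noise power (W)
    alpha: fraction of resources consumed by information sharing among UEs
    """
    if coherent:
        # Phase-aligned signals add in amplitude at the BS.
        snr = np.sum(np.sqrt(p * g)) ** 2 / n0
    else:
        # Without phase alignment, only the powers accumulate.
        snr = np.sum(p * g) / n0
    return (1.0 - alpha) * np.log2(1.0 + snr)

p = np.array([0.2, 0.2, 0.2])        # three cooperating UEs (assumed)
g = np.array([1e-9, 5e-10, 2e-10])   # illustrative path gains
n0, alpha = 1e-12, 0.2               # noise power, 20% sharing overhead

print("coherent JT    :", effective_se(p, g, n0, alpha, True))
print("non-coherent JT:", effective_se(p, g, n0, alpha, False))
print("single UE      :", np.log2(1 + p[0] * g[0] / n0))
```

Even with the 20% overhead in this toy setting, the coherent combining gain leaves the joint schemes ahead of the single-UE baseline, which is the qualitative point of the abstract.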
Abstract: This paper investigates the design of a reliable, intelligent, and truly physical-environment-aware precoding scheme that leverages an accurately designed channel twin model to obtain realistic channel state information (CSI) for cellular communication systems. Specifically, we propose a fine-tuned, multi-step channel twin design process that can render CSI very close to that of the actual environment. After generating precise CSI, we perform precoding with the obtained CSI at the transmitter. We demonstrate a two-step parameter-tuning approach: the channel twin is first designed via ray tracing (RT) emulation, and the CSI is then further fine-tuned with an artificial intelligence (AI)-based algorithm, which significantly reduces the gap between the actual CSI and the digital twin (DT)-rendered CSI. Simulation results show the effectiveness of the proposed approach in designing a channel twin model that is true to the physical environment.
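The two-step idea can be illustrated with a toy example: a coarse ray-tracing twin is corrected toward measured CSI by a fitted model. Here the "actual" channels, the RT mismatch, and the per-subcarrier linear correction (a simple stand-in for the paper's AI-based fine-tuning) are all illustrative assumptions.

```python
# Hedged sketch of the two-step channel-twin process: RT emulation gives a
# coarse twin; a fitted correction pulls it toward measured CSI.
import numpy as np

rng = np.random.default_rng(0)
n_sc, n_snap = 64, 200          # subcarriers, training snapshots (assumed)

# "Actual" CSI, and an RT twin that is biased and noisy relative to it.
H_act = (rng.standard_normal((n_snap, n_sc))
         + 1j * rng.standard_normal((n_snap, n_sc))) / np.sqrt(2)
bias = 0.8 * np.exp(1j * 0.3)   # unknown gain/phase mismatch of the RT twin
H_rt = bias * H_act + 0.1 * (rng.standard_normal((n_snap, n_sc))
                             + 1j * rng.standard_normal((n_snap, n_sc)))

# Fine-tuning step: per-subcarrier least-squares correction w[k] minimizing
# sum_t |H_act[t, k] - w[k] * H_rt[t, k]|^2 (closed form).
w = (np.sum(np.conj(H_rt) * H_act, axis=0)
     / np.sum(np.abs(H_rt) ** 2, axis=0))
H_dt = w * H_rt                 # fine-tuned digital-twin CSI

def nmse(H_hat, H):
    return np.sum(np.abs(H - H_hat) ** 2) / np.sum(np.abs(H) ** 2)

print("NMSE, RT twin only :", nmse(H_rt, H_act))
print("NMSE, fine-tuned DT:", nmse(H_dt, H_act))
```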
Abstract: In the physical layer (PHY) of modern cellular systems, information is transmitted as a sequence of resource blocks (RBs) across various domains, with each resource block limited to a certain extent in time and frequency. In the PHY of 4G/5G systems, data is transmitted in units of transport blocks (TBs) across a fixed number of physical RBs, based on resource allocation decisions. Sharp band-limiting in the frequency domain can provide good separation between different resource allocations without wasting resources in guard bands. However, sharp filters come at the cost of elongating the overall system impulse response, which can accentuate inter-symbol interference (ISI). In a multi-user setup, such as in Machine Type Communication (MTC), different users are allocated resources across time and frequency and operate at different power levels. If strict band-limiting separation is used, high-power user signals can leak in time into low-power user allocations. The ISI extent, i.e., the number of neighboring symbols that contribute to the interference, depends both on the channel delay spread and on the spectral concentration properties of the signaling waveforms. We hypothesize that a precoder that effectively transforms an OFDM waveform basis into a basis comprised of discrete prolate spheroidal sequences (DPSS) can minimize the ISI extent when strictly confined frequency allocations are used. Analytical expressions for upper bounds on the ISI are derived, and simulation results supporting our hypothesis are presented.
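As a rough numerical illustration of why a DPSS basis suits strictly confined frequency allocations, the sketch below compares the in-band energy concentration of rectangular-windowed OFDM subcarriers against DPSS of the same time support. The block length, bandwidth, and oversampling factor are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: spectral concentration of OFDM-style pulses vs. DPSS within
# a strictly confined frequency allocation.
import numpy as np
from scipy.signal.windows import dpss

N = 128                       # samples per block (assumed)
W = 8 / N                     # half-bandwidth of the allocation (normalized)
K = int(2 * N * W) - 4        # back off from the 2NW limit so every kept
                              # DPSS is well concentrated
nfft = 16 * N                 # oversampled grid for leakage measurement
n = np.arange(N)

def inband_fraction(x):
    """Fraction of the energy of x that falls inside |f| <= W."""
    X = np.fft.fftshift(np.fft.fft(x, nfft))
    f = np.fft.fftshift(np.fft.fftfreq(nfft))
    e = np.abs(X) ** 2
    return e[np.abs(f) <= W].sum() / e.sum()

# OFDM-style basis: K rectangular-windowed subcarriers inside the band.
ofdm = [np.exp(2j * np.pi * (k - K // 2) / N * n) for k in range(K)]

# DPSS basis with the same time support and band parameter.
slep = dpss(N, N * W, Kmax=K)

print("worst in-band fraction, OFDM:", min(inband_fraction(x) for x in ofdm))
print("worst in-band fraction, DPSS:", min(inband_fraction(v) for v in slep))
```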
Abstract: Deep learning is making a profound impact in the physical layer of wireless communications. Despite exhibiting outstanding empirical performance in tasks such as MIMO receive processing, the reasons behind this superior performance remain largely unclear. In this work, we advance the field of Explainable AI (xAI) in the physical layer of wireless communications using signal processing principles. Specifically, we focus on the task of MIMO-OFDM receive processing (e.g., symbol detection) using reservoir computing (RC), a framework within recurrent neural networks (RNNs), which outperforms both conventional and other learning-based MIMO detectors. Our analysis provides a signal processing-based, first-principles understanding of the operation of the RC. Building on this fundamental understanding, we systematically incorporate domain knowledge of wireless systems (e.g., channel statistics) into the design of the underlying RNN by directly configuring the untrained RNN weights for MIMO-OFDM symbol detection. The introduced RNN weight configuration is validated through extensive simulations demonstrating significant performance improvements. This establishes a foundation for explainable RC-based architectures in MIMO-OFDM receive processing and provides a roadmap for incorporating domain knowledge into the design of neural networks for NextG systems.
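A minimal reservoir computing sketch conveys the key structural point of the abstract: the recurrent weights are never trained, and only a linear readout is fit on pilots. The toy ISI channel, reservoir size, and spectral radius below are illustrative assumptions and not the paper's MIMO-OFDM configuration.

```python
# Hedged sketch: echo state network for symbol detection; only the linear
# readout is learned (ridge regression on pilot symbols).
import numpy as np

rng = np.random.default_rng(1)
n_res, rho = 64, 0.8                        # reservoir size, spectral radius

# Fixed (untrained) reservoir: random recurrent and input weights.
W = rng.standard_normal((n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
w_in = rng.standard_normal(n_res)

def reservoir_states(x):
    """Drive the fixed reservoir with the received samples, collect states."""
    s, states = np.zeros(n_res), []
    for xt in x:
        s = np.tanh(W @ s + w_in * xt)
        states.append(s.copy())
    return np.array(states)

# Toy link: BPSK symbols through a 3-tap ISI channel plus noise (assumed).
h = np.array([1.0, 0.5, 0.2])
tx = 2.0 * rng.integers(0, 2, 2000) - 1.0
rx = np.convolve(tx, h)[: len(tx)] + 0.1 * rng.standard_normal(len(tx))

S = reservoir_states(rx)
n_pilot = 500                               # pilots fit the only trained part:
A, y = S[:n_pilot], tx[:n_pilot]            # the linear readout
w_out = np.linalg.solve(A.T @ A + 1e-2 * np.eye(n_res), A.T @ y)

det = np.sign(S[n_pilot:] @ w_out)
print("symbol error rate:", np.mean(det != tx[n_pilot:]))
```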
Abstract: In the physical layer (PHY) of modern cellular systems, information is transmitted as a sequence of resource blocks (RBs) across various domains, with each resource block limited to a certain extent in time and frequency. In the PHY of 4G/5G systems, data is transmitted in units of transport blocks (TBs) across a fixed number of physical RBs, based on resource allocation decisions. This simultaneously time- and frequency-localized structure of resource allocation is at odds with the perennial time-frequency compactness limits. Specifically, the band-limiting operation disrupts time localization and leads to inter-block interference (IBI). The IBI extent, i.e., the number of neighboring blocks that contribute to the interference, depends mainly on the spectral concentration properties of the signaling waveforms. Deviating from standard Gabor-frame-based multi-carrier approaches, which use time-frequency shifted versions of a single prototype pulse, we propose using a set of multiple mutually orthogonal pulse shapes that are not related by a time-frequency shift. We hypothesize that using discrete prolate spheroidal sequences (DPSS) as the set of waveform pulse shapes reduces the IBI. Analytical expressions for upper bounds on the IBI are derived, and simulation results supporting our hypothesis are provided.
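The IBI mechanism can be demonstrated directly: band-limit a block that is confined to N samples and measure how much energy escapes its time support, i.e., spills into neighboring blocks. The sketch below, with illustrative sizes, compares DFT (Gabor-type) pulses against mutually orthogonal DPSS pulses.

```python
# Hedged sketch: time leakage (a proxy for IBI) caused by ideal band-limiting
# of a time-localized block, for DFT pulses vs. DPSS pulses.
import numpy as np
from scipy.signal.windows import dpss

N, L = 64, 1024               # block length, overall grid length (assumed)
W = 8 / N                     # half-bandwidth of the ideal band-limiting
K = int(2 * N * W) - 4        # keep only well-concentrated pulses

def bandlimit(x):
    """Ideal band-limiting projection onto |f| <= W on an L-point grid."""
    X = np.fft.fft(x)
    X[np.abs(np.fft.fftfreq(L)) > W] = 0.0
    return np.fft.ifft(X)

def leaked_energy(pulse):
    """Energy pushed outside the block's own N samples by band-limiting."""
    x = np.zeros(L, dtype=complex)
    x[:N] = pulse
    e = np.abs(bandlimit(x)) ** 2
    return 1.0 - e[:N].sum() / e.sum()

n = np.arange(N)
dft_pulses = [np.exp(2j * np.pi * (k - K // 2) / N * n) for k in range(K)]
dpss_pulses = dpss(N, N * W, Kmax=K)

print("worst leakage, DFT pulses :", max(leaked_energy(p) for p in dft_pulses))
print("worst leakage, DPSS pulses:", max(leaked_energy(p) for p in dpss_pulses))
```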
Abstract: This paper proposes a new architecture for Distributed MIMO (D-MIMO) in which the base station (BS) jointly transmits with wireless mobile nodes to serve user equipments (UEs) within a cell for 6G communication systems. The novelty of the architecture lies in the wireless mobile nodes participating in joint D-MIMO transmission with the BS (referred to as D-MIMO nodes), which are themselves users on the network. The D-MIMO nodes establish wireless connections with the BS, are generally located near the BS, and ideally benefit from higher-SNR links to the BS and better connections to edge-located UEs. These D-MIMO nodes can be existing handset UEs, Unmanned Aerial Vehicles (UAVs), or vehicular UEs. Since the D-MIMO nodes are users sharing the access channel, the proposed architecture operates in two phases: first, the BS communicates with the D-MIMO nodes to forward the data for joint transmission; then, the BS and D-MIMO nodes jointly serve the UEs through coherent D-MIMO operation. The capacity of this architecture is analyzed using realistic 3GPP channel models, and the paper demonstrates that, despite the two-phase operation, the proposed architecture enhances system capacity compared to a baseline where the BS communicates directly with the UEs.
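The two-phase trade-off can be summarized with a back-of-the-envelope rate calculation: the time split between forwarding and joint transmission is chosen so that neither phase bottlenecks the other. The SNR values below are illustrative assumptions, not the paper's 3GPP-calibrated results.

```python
# Hedged sketch: effective rate of the two-phase D-MIMO protocol vs. a direct
# BS -> UE baseline, under assumed link SNRs.
import numpy as np

def rate(snr_db):
    return np.log2(1.0 + 10.0 ** (snr_db / 10.0))

snr_direct = 0.0      # BS -> edge UE, weak direct link (dB, assumed)
snr_fwd = 20.0        # BS -> nearby D-MIMO nodes (dB, assumed)
snr_joint = 12.0      # BS + nodes -> UE, coherent joint transmission (dB)

r_direct = rate(snr_direct)
r1, r2 = rate(snr_fwd), rate(snr_joint)

# Optimal split t* equalizes the data moved per phase: t*r1 = (1 - t)*r2,
# giving an effective rate r1*r2 / (r1 + r2).
t_star = r2 / (r1 + r2)
r_two_phase = t_star * r1

print("direct BS->UE rate   :", r_direct)
print("two-phase D-MIMO rate:", r_two_phase)
```

In this toy setting the two-phase scheme beats the weak direct link despite spending part of the frame on forwarding, mirroring the abstract's qualitative claim.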
Abstract: Integration of artificial intelligence (AI) and machine learning (ML) into the air interface has been envisioned as a key technology for next-generation (NextG) cellular networks. At the air interface, multiple-input multiple-output (MIMO) and its variants, such as multi-user MIMO (MU-MIMO) and massive/full-dimension MIMO, have been key enablers across successive generations of cellular networks, with evolving complexity and design challenges. Actively investigating how AI/ML tools can address these challenges for MIMO is therefore a critical step towards an AI-enabled NextG air interface. The underlying wireless environment at the NextG air interface will be extremely dynamic, with MIMO operations such as MU-MIMO scheduling and rank/link adaptation performed on a sub-millisecond basis. Given the enormously large number of possible operation adaptations, we contend that online real-time AI/ML-based approaches constitute a promising paradigm. To this end, we outline the inherent challenges and offer insights into the design of such online real-time AI/ML-based solutions for MIMO operations. An online real-time AI/ML-based method for MIMO-OFDM channel estimation is then presented, serving as a potential roadmap for developing similar techniques across various MIMO operations in NextG.
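As a hedged stand-in for the online real-time flavor described above (this is not the paper's AI/ML method), the sketch below tracks a time-varying OFDM channel with a per-subcarrier recursive least squares (RLS) update that learns from each pilot-bearing symbol, with no offline training phase. The channel model and pilot pattern are illustrative assumptions.

```python
# Hedged sketch: online, per-symbol channel tracking via scalar RLS on each
# subcarrier, as an illustration of online real-time adaptation.
import numpy as np

rng = np.random.default_rng(2)
n_sc, n_sym, lam = 32, 400, 0.95     # subcarriers, OFDM symbols, forgetting

# Slowly time-varying per-subcarrier channel (toy first-order evolution).
h = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)
h_hat = np.zeros(n_sc, dtype=complex)
p = np.ones(n_sc)                    # per-subcarrier RLS "covariance"

mse = []
for t in range(n_sym):
    h = 0.999 * h + 0.045 * (rng.standard_normal(n_sc)
                             + 1j * rng.standard_normal(n_sc))
    x = rng.choice([1.0, -1.0], n_sc)            # known pilot symbols (BPSK)
    y = h * x + 0.1 * (rng.standard_normal(n_sc)
                       + 1j * rng.standard_normal(n_sc))
    # Scalar RLS update per subcarrier (regressor x with |x|^2 = 1).
    k = p / (lam + p)
    h_hat = h_hat + k * np.conj(x) * (y - h_hat * x)
    p = (1.0 - k) * p / lam
    mse.append(np.mean(np.abs(h - h_hat) ** 2))

print("tracking MSE after warm-up:", np.mean(mse[50:]))
```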
Abstract: It is well known that index-limited (discrete-time) sampled sequences leak outside their support set when a band-limiting operation is applied. Similarly, a fractional shift causes an index-limited sequence to become infinite in extent due to the inherent band-limiting. Index-limited versions of discrete prolate spheroidal sequences (DPSS) are known to experience minimum leakage after band-limiting. In this work, we consider the effect of a half-sample shift and provide upper bounds on the resulting leakage energy for arbitrary sequences. Furthermore, we find an orthonormal basis derived from DPSS whose members are ordered according to their energy concentration after a half-sample shift, with the first member being the global optimum.
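The half-sample-shift leakage effect is easy to reproduce numerically: an ideal fractional delay is applied on a large FFT grid and the energy escaping the original support is measured. The sketch compares a rectangular sequence against a standard first DPSS (an illustrative proxy; the paper derives its own DPSS-based basis), with sizes chosen as assumptions.

```python
# Hedged sketch: leakage energy outside the original support after an ideal
# half-sample shift, rectangular sequence vs. first DPSS.
import numpy as np
from scipy.signal.windows import dpss

N, L = 64, 4096               # support length, large FFT grid (proxy for Z)

def half_sample_shift(x):
    """Ideal (band-limited) fractional delay of 0.5 samples."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))
    return np.fft.ifft(X * np.exp(-2j * np.pi * f * 0.5))

def leakage(pulse):
    """Energy fraction outside the original N-sample support after shifting."""
    x = np.concatenate([pulse, np.zeros(L - N)])
    e = np.abs(half_sample_shift(x)) ** 2
    return 1.0 - e[:N].sum() / e.sum()

rect = np.ones(N) / np.sqrt(N)
slep0 = dpss(N, 4)            # first (most concentrated) standard DPSS

print("leakage, rectangular sequence:", leakage(rect))
print("leakage, first DPSS          :", leakage(slep0))
```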
Abstract: Jamming and intrusion detection are critical in 5G research, aiming to maintain reliability, prevent user experience degradation, and avoid infrastructure failure. This paper introduces an anonymous jamming detection model for 5G based on signal parameters from the protocol stacks. The system uses both supervised and unsupervised learning for real-time, high-accuracy detection of jamming, including previously unseen types. Supervised models reach an AUC of 0.964 to 1, compared to LSTM models with an AUC of 0.923 to 1. However, the need for data annotation limits the supervised approach. To address this, an unsupervised autoencoder-based anomaly detection approach is presented, achieving an AUC of 0.987 while remaining resistant to adversarial training samples. For transparency and domain-knowledge injection, a Bayesian network-based causation analysis is also introduced.
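The unsupervised branch can be sketched with a linear autoencoder (equivalently, PCA) trained on benign-only feature vectors and scoring test samples by reconstruction error. The synthetic KPI features and the jamming distribution below are illustrative assumptions, and the printed AUC is for this toy data, not the paper's 0.987.

```python
# Hedged sketch: reconstruction-error anomaly detection with a linear
# autoencoder (PCA subspace) fit on benign traffic only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
d, k = 12, 4                                   # feature dim, bottleneck size

# Benign KPI vectors live near a k-dimensional subspace; jamming breaks it.
A = rng.standard_normal((k, d))
benign = rng.standard_normal((2000, k)) @ A + 0.1 * rng.standard_normal((2000, d))
jammed = 2.0 * rng.standard_normal((200, d))

# "Train" on benign traffic only: the principal subspace acts as the
# encoder/decoder of a linear autoencoder.
mu = benign.mean(axis=0)
_, _, Vt = np.linalg.svd(benign - mu, full_matrices=False)
V = Vt[:k].T

def score(x):
    """Anomaly score: reconstruction error outside the benign subspace."""
    c = x - mu
    return np.sum((c - c @ V @ V.T) ** 2, axis=1)

# Held-out benign samples plus jammed samples, labeled for AUC evaluation.
test_benign = rng.standard_normal((500, k)) @ A + 0.1 * rng.standard_normal((500, d))
X = np.vstack([test_benign, jammed])
y = np.r_[np.zeros(len(test_benign)), np.ones(len(jammed))]
print("AUC on toy data:", roc_auc_score(y, score(X)))
```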