Abstract:This paper offers a thorough analysis of the coverage performance of Low Earth Orbit (LEO) satellite networks using a strongest satellite association approach, with a particular emphasis on shadowing effects modeled through a Poisson point process (PPP)-based network framework. We derive an analytical expression for the coverage probability, which incorporates key system parameters and a distance-dependent shadowing probability function, explicitly accounting for both line-of-sight and non-line-of-sight propagation channels. To enhance the practical relevance of our findings, we provide both lower and upper bounds for the coverage probability and introduce a closed-form solution based on a simplified shadowing model. Our analysis reveals several important network design insights, including the enhancement of coverage probability by distance-dependent shadowing effects and the identification of an optimal satellite altitude that balances beam gain benefits with interference drawbacks. Notably, our PPP-based network model shows strong alignment with other established models, confirming its accuracy and applicability across a variety of satellite network configurations. The insights gained from our analysis are valuable for optimizing LEO satellite deployment strategies and improving network performance in diverse scenarios.
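As a companion to the analytical expression, below is a minimal Monte Carlo sketch of the modeled setup: satellites drawn from a PPP on an orbital sphere, a distance-dependent LOS probability splitting each link into LOS or NLOS, and strongest-satellite association. Every numerical value (density, altitude, path-loss exponents, noise power, and the LOS probability function itself) is an illustrative assumption, not a parameter taken from the paper.

```python
# Monte Carlo sketch of PPP-based LEO coverage with strongest-satellite association.
import numpy as np

rng = np.random.default_rng(0)

R_E = 6371e3             # Earth radius [m]
h = 550e3                # satellite altitude [m] (assumed)
lam = 2e-12              # PPP density on the orbital sphere [points/m^2] (assumed)
alpha_los, alpha_nlos = 2.0, 3.0   # LOS/NLOS path-loss exponents (assumed)
sinr_th = 1.0            # SINR threshold (0 dB)
noise = 1e-13            # noise power (assumed)

def p_los(d):
    """Illustrative distance-dependent LOS probability (stand-in for the paper's model)."""
    return np.exp(-(d - h) / 500e3)

def covered():
    r = R_E + h
    n = rng.poisson(lam * 4 * np.pi * r**2)          # number of satellites on the sphere
    if n == 0:
        return False
    cos_t = rng.uniform(-1, 1, n)                    # uniform points relative to the user direction
    vis = cos_t >= R_E / r                           # keep satellites above the local horizon
    d = np.sqrt(R_E**2 + r**2 - 2 * R_E * r * cos_t[vis])
    if d.size == 0:
        return False
    los = rng.uniform(size=d.size) < p_los(d)
    alpha = np.where(los, alpha_los, alpha_nlos)
    rx = rng.exponential(size=d.size) * d ** (-alpha)   # Rayleigh power fading (assumed)
    k = np.argmax(rx)                                   # strongest-satellite association
    return rx[k] / (rx.sum() - rx[k] + noise) > sinr_th

print("coverage probability ~", np.mean([covered() for _ in range(2000)]))
```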
Abstract:Low Earth orbit (LEO) satellite networks with mega constellations have the potential to provide 5G and beyond services ubiquitously. However, these networks may introduce mutual interference to both satellite and terrestrial networks, particularly when sharing spectrum resources. In this paper, we present a system-level performance analysis to address these interference issues using the tool of stochastic geometry. We model the spatial distributions of satellites, satellite users, terrestrial base stations (BSs), and terrestrial users using independent Poisson point processes on the surfaces of concentric spheres. Under these spatial models, we derive analytical expressions for the ergodic spectral efficiency of uplink (UL) and downlink (DL) satellite networks when they share spectrum with both UL and DL terrestrial networks. These derived ergodic expressions capture comprehensive network parameters, including the densities of satellite and terrestrial networks, the path-loss exponent, and fading. From our analysis, we determine the conditions under which spectrum sharing with UL terrestrial networks is advantageous for both UL and DL satellite networks. Our key finding is that the optimal spectrum sharing configuration among the four possible configurations depends on the density ratio between terrestrial BSs and users, providing a design guideline for spectrum management. Simulation results confirm the accuracy of our derived expressions.
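The following is a rough Monte Carlo sketch of one of the shared-spectrum configurations described above: the ergodic spectral efficiency of a downlink satellite user whose band is reused by uplink terrestrial users, both modeled as PPPs on spheres. It is not the paper's closed-form result; all densities, heights, powers, and the path-loss exponent are illustrative assumptions.

```python
# Monte Carlo sketch: ergodic SE of a DL satellite link with UL terrestrial interference.
import numpy as np

rng = np.random.default_rng(7)
R_E = 6371e3
lam_sat, lam_ue = 1.5e-12, 2e-9            # satellite / UL-user densities [1/m^2] (assumed)
h_sat, h_ue = 550e3, 1.5                   # orbital altitude and UL-user antenna height [m]
P_sat, P_ue, alpha, noise = 10.0, 0.2, 3.0, 1e-17   # powers, path-loss exponent, noise (assumed)

def visible_dists(height, density):
    """Distances from a typical ground user to the visible points of a PPP
    on the sphere of radius R_E + height."""
    r = R_E + height
    n = rng.poisson(density * 4 * np.pi * r**2)
    cos_t = rng.uniform(-1, 1, n)
    vis = cos_t >= R_E / r
    return np.sqrt(R_E**2 + r**2 - 2 * R_E * r * cos_t[vis])

def one_trial():
    d_sat = visible_dists(h_sat, lam_sat)
    if d_sat.size == 0:
        return 0.0                          # no serving satellite in view
    d_ue = visible_dists(h_ue, lam_ue)      # co-channel uplink terrestrial users
    g = rng.exponential()                   # Rayleigh power fading on the desired link (assumed)
    signal = P_sat * g * d_sat.min() ** (-alpha)
    interference = P_ue * np.sum(rng.exponential(size=d_ue.size) * d_ue ** (-alpha))
    return np.log2(1 + signal / (interference + noise))

print("ergodic SE ~", np.mean([one_trial() for _ in range(2000)]), "bit/s/Hz")
```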
Abstract:In frequency-division duplexing (FDD) multiple-input multiple-output (MIMO) systems, obtaining accurate downlink channel state information (CSI) for precoding is vastly challenging due to the tremendous feedback overhead with the growing number of antennas. Utilizing uplink pilots for downlink CSI estimation is a promising approach that can eliminate CSI feedback. However, the downlink CSI estimation accuracy diminishes significantly as the number of channel paths increases, resulting in reduced spectral efficiency. In this paper, we demonstrate that achieving downlink spectral efficiency comparable to perfect CSI is feasible by combining uplink CSI with limited downlink CSI feedback information. Our proposed downlink CSI feedback strategy transmits quantized phase information of downlink channel paths, deviating from conventional limited methods. We put forth a mean square error (MSE)-optimal downlink channel reconstruction method by jointly exploiting the uplink CSI and the limited downlink CSI. Armed with the MSE-optimal estimator, we derive the MSE as a function of the number of feedback bits for phase quantization. Subsequently, we present an optimal feedback bit allocation method for minimizing the MSE in the reconstructed channel through phase quantization. Utilizing a robust downlink precoding technique, we establish that the proposed downlink channel reconstruction method is sufficient for attaining a sum-spectral efficiency comparable to perfect CSI.
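To make the limited phase-feedback idea concrete, here is a small numerical sketch: path magnitudes and angles are treated as known from uplink CSI (assumed reciprocal), only each path's downlink phase is quantized with B bits and fed back, and the resulting reconstruction error is measured. It is not the MSE-optimal estimator or the bit-allocation rule from the paper; the array size, path count, and channel model are assumptions.

```python
# Sketch: downlink channel reconstruction from uplink path parameters + quantized phases.
import numpy as np

rng = np.random.default_rng(1)
N, L = 32, 4                     # BS antennas and channel paths (assumed)

def steering(theta, N):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def reconstruction_nmse(B):
    gains = rng.rayleigh(scale=1.0, size=L)          # path magnitudes (from uplink CSI, assumed)
    angles = rng.uniform(-np.pi / 3, np.pi / 3, L)   # path angles (from uplink CSI, assumed)
    phases = rng.uniform(0, 2 * np.pi, L)            # downlink-specific path phases
    h = sum(g * np.exp(1j * p) * steering(a, N) for g, p, a in zip(gains, phases, angles))
    # Uniform phase quantization with B bits per path (the fed-back information).
    q = 2 * np.pi * np.round(phases / (2 * np.pi) * 2**B) / 2**B
    h_hat = sum(g * np.exp(1j * p) * steering(a, N) for g, p, a in zip(gains, q, angles))
    return np.sum(np.abs(h - h_hat) ** 2) / np.sum(np.abs(h) ** 2)

for B in (1, 2, 3, 4, 6):
    nmse = np.mean([reconstruction_nmse(B) for _ in range(2000)])
    print(f"B = {B} bits/path, NMSE ~ {nmse:.4f}")
```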
Abstract:This paper investigates full-duplex (FD) multi-user multiple-input multiple-output (MU-MIMO) system design with coarse quantization. We first analyze the impact of self-interference (SI) on quantization in FD single-input single-output systems. The analysis elucidates that the minimum required number of analog-to-digital converter (ADC) bits is logarithmically proportional to the ratio of the total received power to the received power of the desired signal. Motivated by this, we design an FD MIMO beamforming method that effectively manages the SI. Dividing a spectral efficiency maximization beamforming problem into two sub-problems for alternating optimization, we address the first by optimizing the precoder: we obtain a generalized eigenvalue problem from the first-order optimality condition, in which the principal eigenvector is the optimal stationary solution, and adopt a power iteration method to identify this eigenvector. Subsequently, a quantization-aware minimum mean square error combiner is computed for the derived precoder. Through numerical studies, we observe that, compared to FD benchmarks, the proposed beamformer reduces the minimum number of ADC bits required to achieve a higher spectral efficiency than that of half-duplex (HD) systems. The overall analysis shows that, unlike in quantized HD systems, more than 6 bits are required for the ADC to fully realize the potential of the quantized FD system.
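The computational core mentioned above, finding the principal eigenvector of a generalized eigenvalue problem via power iteration, can be sketched as follows. The matrices A and B here are random Hermitian positive-definite stand-ins for the actual beamforming matrices, which are not reproduced from the paper.

```python
# Generalized power iteration for the principal eigenpair of A v = lambda B v.
import numpy as np

rng = np.random.default_rng(2)
n = 8
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ X.conj().T + n * np.eye(n)     # Hermitian positive-definite stand-in (assumed)
B = Y @ Y.conj().T + n * np.eye(n)     # Hermitian positive-definite stand-in (assumed)

def generalized_power_iteration(A, B, iters=200, tol=1e-10):
    v = rng.standard_normal(A.shape[0]) + 1j * rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = np.linalg.solve(B, A @ v)          # v <- B^{-1} A v
        w /= np.linalg.norm(w)
        if abs(abs(np.vdot(w, v)) - 1.0) < tol:   # direction (up to phase) has converged
            v = w
            break
        v = w
    lam = np.real(np.vdot(v, A @ v) / np.vdot(v, B @ v))   # generalized Rayleigh quotient
    return lam, v

lam, v = generalized_power_iteration(A, B)
print("principal generalized eigenvalue ~", lam)
print("direct-solver check:", np.max(np.real(np.linalg.eigvals(np.linalg.solve(B, A)))))
```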
Abstract:Nonlinear self-interference cancellation (SIC) is essential for full-duplex communication systems, which can offer twice the spectral efficiency of traditional half-duplex systems. The challenge of nonlinear SIC is similar to the classic problem of system identification in adaptive filter theory, whose crux lies in identifying the optimal nonlinear basis functions for a nonlinear system. This becomes especially difficult when the system input has a non-stationary distribution. In this paper, we propose a novel algorithm for nonlinear digital SIC that adaptively constructs orthonormal polynomial basis functions according to the non-stationary moments of the transmit signal. By combining these basis functions with the least mean squares (LMS) algorithm, we introduce a new SIC technique, called the adaptive orthonormal polynomial LMS (AOP-LMS) algorithm. To reduce computational complexity for practical systems, we augment our approach with a precomputed look-up table that maps a given modulation and coding scheme to its corresponding basis functions. Numerical simulations indicate that our proposed method surpasses existing state-of-the-art SIC algorithms in terms of convergence speed and mean squared error when the transmit signal is non-stationary, such as with adaptive modulation and coding. Experimental evaluation on a wireless testbed confirms that our proposed approach outperforms existing digital SIC algorithms.
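A rough sketch of the idea described above follows: per block, the monomials x, x|x|^2, x|x|^4, ... are orthonormalized with respect to the empirical moments of the current transmit signal, and LMS runs on the resulting features. The SI channel model, block length, step size, and signal segments are illustrative assumptions, not the paper's configuration.

```python
# Illustrative AOP-LMS-style nonlinear SIC sketch with a non-stationary transmit signal.
import numpy as np

rng = np.random.default_rng(3)
N, P = 20000, 3                     # samples and number of basis functions (assumed)

# Non-stationary transmit signal: a QPSK segment followed by a Gaussian-like segment.
x1 = (rng.choice([1, -1], N // 2) + 1j * rng.choice([1, -1], N // 2)) / np.sqrt(2)
x2 = (rng.standard_normal(N // 2) + 1j * rng.standard_normal(N // 2)) / np.sqrt(2)
x = np.concatenate([x1, x2])

# Unknown nonlinear SI channel (memoryless PA-style model, assumed for illustration).
y = 0.8 * x - 0.1 * x * np.abs(x) ** 2 + 0.01 * x * np.abs(x) ** 4
y += 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def orthonormal_basis(block, P):
    """Gram-Schmidt on [x, x|x|^2, x|x|^4, ...] w.r.t. the block's empirical moments."""
    raw = np.stack([block * np.abs(block) ** (2 * p) for p in range(P)])
    Q = []
    for r in raw:
        v = r - sum(np.mean(r * np.conj(q)) * q for q in Q)
        Q.append(v / np.sqrt(np.mean(np.abs(v) ** 2)))
    return np.stack(Q)

mu, block_len = 0.05, 1000
w = np.zeros(P, dtype=complex)
err = np.empty(N, dtype=complex)
for b in range(N // block_len):
    sl = slice(b * block_len, (b + 1) * block_len)
    Phi = orthonormal_basis(x[sl], P)                 # re-fit basis to current statistics
    for n in range(block_len):
        phi = Phi[:, n]
        e = y[sl][n] - np.vdot(w, phi)                # residual SI after cancellation
        w = w + mu * np.conj(e) * phi                 # complex LMS update
        err[sl][n] = e

print("residual SI power (last block) ~", np.mean(np.abs(err[-block_len:]) ** 2))
```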
Abstract:Integrated sensing and communication (ISAC) is widely recognized as a fundamental enabler for future wireless communications. In this paper, we present a joint communication and radar beamforming framework for maximizing the sum spectral efficiency (SE) while guaranteeing the desired radar performance with imperfect channel state information (CSI) in multi-user and multi-target ISAC systems. To this end, we adopt either a radar transmit beam mean square error (MSE) or a receive signal-to-clutter-plus-noise ratio (SCNR) as the radar performance constraint of a sum SE maximization problem. To resolve inherent challenges such as non-convexity and imperfect CSI, we reformulate the problems and identify first-order optimality conditions for the joint radar and communication beamformer. Turning the condition into a nonlinear eigenvalue problem with eigenvector dependency (NEPv), we develop an alternating method that finds the joint beamformer through power iteration and a Lagrangian multiplier through binary search. The proposed framework encompasses both radar metrics and is robust to channel estimation error while maintaining low complexity. Simulations validate the proposed methods. In particular, we observe that the MSE and SCNR constraints exhibit complementary performance depending on the operating environment, which manifests the importance of the proposed comprehensive and robust optimization framework.
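The power-iteration part of the NEPv solution mentioned above can be illustrated with a toy self-consistent iteration on H(v) v = lambda v, where the matrix H depends on the current iterate v. The coupling chosen below is arbitrary (not the ISAC beamforming matrix), the Lagrangian binary search is omitted, and convergence of this toy loop depends on the construction; the sketch only shows the shape of the algorithm.

```python
# Toy fixed-point (power-iteration) loop for an eigenvector-dependent eigenvalue problem.
import numpy as np

rng = np.random.default_rng(4)
n = 6
X = rng.standard_normal((n, n)); A = X @ X.T + n * np.eye(n)   # PD stand-in (assumed)
Y = rng.standard_normal((n, n)); B = Y @ Y.T + n * np.eye(n)   # PD stand-in (assumed)

def H(v):
    # Eigenvector-dependent matrix: the weight on B shifts with the current iterate.
    return A + float(v @ B @ v) * B

v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(500):
    w = H(v) @ v                    # one power step with the current H(v)
    w /= np.linalg.norm(w)
    if np.linalg.norm(w - np.sign(w @ v) * v) < 1e-12:
        v = w
        break
    v = w

lam = v @ H(v) @ v
print("NEPv residual ||H(v)v - lam v|| =", np.linalg.norm(H(v) @ v - lam * v))
```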
Abstract:Full-duplex communication systems have the potential to achieve significantly higher data rates and lower latency compared to their half-duplex counterparts. This advantage stems from their ability to transmit and receive data simultaneously. However, to enable successful full-duplex operation, the primary challenge lies in accurately eliminating strong self-interference (SI). Overcoming this challenge involves addressing various issues, including the nonlinearity of power amplifiers, the time-varying nature of the SI channel, and the non-stationary transmit data distribution. In this article, we present a review of recent advancements in digital self-interference cancellation (SIC) algorithms. Our focus is on comparing the effectiveness of adaptable model-based SIC methods with their model-free counterparts that leverage data-driven machine learning techniques. Through our comparison study under practical scenarios, we demonstrate that the model-based SIC approach offers a more robust solution to the time-varying SI channel and the non-stationary transmission, achieving optimal SIC performance in terms of convergence rate while maintaining low computational complexity. To validate our findings, we conduct experiments using a software-defined radio testbed that conforms to the IEEE 802.11a standard. The experimental results demonstrate the robustness of the model-based SIC methods, providing practical evidence of their effectiveness.
Abstract:With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies allowing one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks in one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows one to consider each point as a source of either the desired signal or interference while ensuring visibility to the typical user. Thanks to this model, we derive the coverage probability of STINs as a function of the major system parameters, chiefly the path-loss exponent, the height distributions and densities of satellites and terrestrial base stations, transmit powers, and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
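The unified model described above can be sketched numerically as follows: a Poisson number of points per tier is placed uniformly on a sphere, each point carries a random height mark, visibility to a typical user is checked against the local horizon, and the user associates with the tier offering the largest biased average received power. The densities, height ranges, powers, bias, and path-loss exponent below are arbitrary assumptions for illustration.

```python
# Sketch of the marked-PPP STIN model with biased tier association.
import numpy as np

rng = np.random.default_rng(5)
R_E = 6371e3   # Earth radius [m]

def visible_distances(h_lo, h_hi, density):
    """Distances from a typical ground user to the visible points of a PPP whose
    points carry a random height mark drawn uniformly from [h_lo, h_hi]."""
    n = rng.poisson(density * 4 * np.pi * (R_E + h_hi) ** 2)
    r = R_E + rng.uniform(h_lo, h_hi, n)        # random height mark per point
    cos_t = rng.uniform(-1, 1, n)               # uniform angular placement on the sphere
    vis = cos_t >= R_E / r                      # above the user's local horizon
    return np.sqrt(R_E**2 + r[vis]**2 - 2 * R_E * r[vis] * cos_t[vis])

def associated_tier():
    d_sat = visible_distances(500e3, 600e3, density=1.5e-12)   # LEO tier (assumed)
    d_bs = visible_distances(10.0, 30.0, density=2e-9)         # terrestrial tier (assumed)
    P_sat, P_bs, bias_sat, alpha = 10.0, 1.0, 5.0, 3.0         # powers, satellite bias, path loss
    best_sat = bias_sat * P_sat * d_sat.min() ** (-alpha) if d_sat.size else 0.0
    best_bs = P_bs * d_bs.min() ** (-alpha) if d_bs.size else 0.0
    return "satellite" if best_sat > best_bs else "terrestrial"

tiers = [associated_tier() for _ in range(1000)]
print("fraction of users served by the satellite tier ~", tiers.count("satellite") / len(tiers))
```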
Abstract:In the upcoming 6G era, multiple access (MA) will play an essential role in achieving the high throughput performance required in a wide range of wireless applications. Since MA and interference management are closely related issues, conventional MA techniques are limited in that they cannot provide near-optimal performance across all interference regimes. Recently, rate-splitting multiple access (RSMA) has been gaining much attention. RSMA splits an individual message into two parts: a common part, decodable by every user, and a private part, decodable only by the intended user. Each user first decodes the common message and then decodes its private message by applying successive interference cancellation (SIC). By doing so, RSMA not only embraces the existing MA techniques as special cases but also provides significant performance gains by efficiently mitigating inter-user interference in a broad range of interference regimes. In this article, we first present the theoretical foundation of RSMA. Subsequently, we put forth four key benefits of RSMA: spectral efficiency, robustness, scalability, and flexibility. Building on this, we describe how RSMA can enable ten promising scenarios and applications along with future research directions to pave the way for 6G.
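The decoding order described above translates directly into an achievable-rate computation. Below is a numeric sketch for a two-user, single-antenna downlink toy case: the common stream is decoded first by both users (so its rate is set by the weaker user) and removed via SIC, after which each user decodes its private stream while treating the other private stream as noise. The channel gains, power split, and noise level are arbitrary assumptions.

```python
# Two-user single-antenna RSMA sum-rate sketch (illustrative parameters).
import numpy as np

g1, g2 = 1.0, 0.3          # channel power gains of user 1 and user 2 (assumed)
P, sigma2 = 10.0, 1.0      # total transmit power and noise power (assumed)

def rsma_sum_rate(t_common):
    """Sum rate when a fraction t_common of the power carries the common stream
    and the rest is split equally between the two private streams."""
    Pc, Pp = t_common * P, (1 - t_common) * P / 2
    # Common-stream SINR at each user (all private streams act as interference).
    sinr_c = [g * Pc / (g * 2 * Pp + sigma2) for g in (g1, g2)]
    R_c = np.log2(1 + min(sinr_c))           # common rate limited by the weaker user
    # Private streams after the common stream is cancelled by SIC.
    R_p1 = np.log2(1 + g1 * Pp / (g1 * Pp + sigma2))
    R_p2 = np.log2(1 + g2 * Pp / (g2 * Pp + sigma2))
    return R_c + R_p1 + R_p2

for t in (0.0, 0.5, 0.8):
    print(f"power fraction on common = {t}: sum rate ~ {rsma_sum_rate(t):.2f} bit/s/Hz")
```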
Abstract:In this paper, we propose a learning-based detection framework for uplink massive multiple-input multiple-output (MIMO) systems with one-bit analog-to-digital converters. The learning-based detection only requires counting the occurrences of the quantized outputs of -1 and +1 to estimate a likelihood probability at each antenna. Accordingly, the key advantage of this approach is that it performs maximum likelihood detection without explicit channel estimation, which has been one of the primary challenges of one-bit quantized systems. Learning in the high signal-to-noise ratio (SNR) regime, however, requires excessive training to estimate the extremely small likelihood probabilities. To address this drawback, we propose a dither-and-learning technique that estimates the likelihood functions from dithered signals. First, we add a dithering signal to artificially decrease the SNR and then infer the likelihood function from the quantized dithered signals by using an SNR estimate derived from a deep neural network-based offline estimator. We extend our technique by developing an adaptive dither-and-learning method that updates the dithering power according to the patterns observed in the quantized dithered signals. The proposed framework is also applied to state-of-the-art channel-coded MIMO systems by computing bit-wise and user-wise log-likelihood ratios from the refined likelihood probabilities. Simulation results validate the detection performance of the proposed methods in both uncoded and coded systems.
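A minimal sketch of the counting-based likelihood learning with dithering follows, reduced to a single-antenna, real-valued BPSK toy case. The dither power is a fixed assumption here rather than the DNN-based estimate described above, and the channel, SNR, and pilot length are illustrative.

```python
# Dither-and-learn sketch: count one-bit outputs to learn likelihoods, then detect by ML.
import numpy as np

rng = np.random.default_rng(6)
h, snr_db = 0.9, 25.0                     # channel gain and SNR (assumed)
sigma = 10 ** (-snr_db / 20)
symbols = np.array([1.0, -1.0])           # BPSK candidates
N_train, sigma_d = 500, 0.5               # pilot repetitions and dither std (assumed)

# Learning phase: estimate p(y = +1 | s) by counting quantized outputs of dithered pilots.
p_plus = {}
for s in symbols:
    r = h * s + sigma * rng.standard_normal(N_train) + sigma_d * rng.standard_normal(N_train)
    p_plus[s] = np.clip(np.mean(np.sign(r) > 0), 1e-3, 1 - 1e-3)   # avoid degenerate 0/1 estimates

# Detection phase: ML decision from the learned likelihoods (dither kept on, to match training).
def detect(y_quantized):
    logls = [np.log(p_plus[s]) if y_quantized > 0 else np.log(1 - p_plus[s]) for s in symbols]
    return symbols[int(np.argmax(logls))]

errors, N_test = 0, 20000
for _ in range(N_test):
    s = rng.choice(symbols)
    y = np.sign(h * s + sigma * rng.standard_normal() + sigma_d * rng.standard_normal())
    errors += detect(y) != s
print("empirical symbol error rate ~", errors / N_test)
```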