Abstract: Semantic communications based on deep joint source-channel coding (JSCC) aim to improve communication efficiency by transmitting only task-relevant information. However, ensuring robustness to the stochasticity of communication channels remains a key challenge in learning-based JSCC. In this paper, we propose a novel regularization technique for learning-based JSCC to enhance robustness against channel noise. The proposed method utilizes the Kullback-Leibler (KL) divergence as a regularization term in the training loss, measuring the discrepancy between two posterior distributions: one under noisy channel conditions (noisy posterior) and one for a noise-free system (noise-free posterior). Reducing this KL divergence mitigates the impact of channel noise on task performance by keeping the noisy posterior close to the noise-free posterior. We further show that the expectation of the KL divergence given the encoded representation can be analytically approximated using the Fisher information matrix and the covariance matrix of the channel noise. Notably, the proposed regularization is architecture-agnostic, making it broadly applicable to general semantic communication systems over noisy channels. Our experimental results validate that the proposed regularization consistently improves task performance across diverse semantic communication systems and channel conditions.
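The Fisher-based approximation of the expected KL divergence can be sketched numerically. The following is a minimal numpy illustration only: the toy softmax posterior p(y|z) = softmax(Wz), the linear map W, and isotropic channel noise Σ = σ²I are assumptions for the sketch, not the paper's model or implementation.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def fisher_regularizer(W, z, sigma2):
    """Sketch of E[KL(noisy || noise-free)] ~ 0.5 * tr(F(z) * Sigma),
    with Sigma = sigma2 * I and F(z) the Fisher information of the
    (toy) posterior p(y|z) = softmax(W z) w.r.t. the encoded vector z."""
    p = softmax(W @ z)
    K, d = W.shape
    F = np.zeros((d, d))
    for y in range(K):
        g = W.T @ (np.eye(K)[y] - p)   # grad_z log p(y|z) for softmax
        F += p[y] * np.outer(g, g)     # Fisher = E_y[g g^T]
    return 0.5 * sigma2 * np.trace(F)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
z = rng.standard_normal(8)
reg = fisher_regularizer(W, z, sigma2=0.1)
```

In training, a term like `reg` would be added to the task loss so that gradients push the encoder toward representations whose posterior is insensitive to channel perturbations.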
Abstract: We propose a novel approach for channel state information (CSI) compression in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems, where the frequency-domain channel matrix is treated as a high-dimensional complex-valued image. Our method leverages transformer-based nonlinear transform coding (NTC), an advanced deep-learning-driven image compression technique that generates a highly compact binary representation of the CSI. Unlike conventional autoencoder-based CSI compression, NTC optimizes a nonlinear mapping to produce a latent vector while simultaneously estimating its probability distribution for efficient entropy coding. By exploiting the statistical independence of latent vector entries, we integrate a transformer-based deep neural network with a scalar nested-lattice uniform quantization scheme, enabling low-complexity, multi-rate CSI feedback that dynamically adapts to varying feedback channel conditions. The proposed multi-rate CSI compression scheme achieves state-of-the-art rate-distortion performance, outperforming existing techniques with the same number of neural network parameters. Simulation results further demonstrate that our approach provides a superior rate-distortion trade-off, requiring only 6% of the neural network parameters compared to existing methods, making it highly efficient for practical deployment.
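A scalar nested-lattice uniform quantizer of the kind mentioned above can be sketched as follows. This is a generic illustration of the nested-lattice idea (fine lattice step·Z nested inside coarse lattice step·levels·Z), not the paper's trained quantizer; the specific step size and level count are arbitrary assumptions, and reconstruction is only exact up to one coarse cell.

```python
import numpy as np

def nested_lattice_quantize(x, step, levels):
    """Quantize to the fine lattice step*Z, then transmit the index modulo
    the coarse lattice (step*levels)*Z -> log2(levels) bits per entry."""
    fine = np.round(x / step).astype(int)
    return np.mod(fine, levels)

def nested_lattice_dequantize(idx, step, levels):
    """Map each index back to the centered representative in one coarse cell."""
    centered = np.where(idx >= levels // 2, idx - levels, idx)
    return centered * step

x = np.array([0.12, -0.40, 0.33])       # toy latent entries
step, levels = 0.1, 16                  # 4 bits per entry (assumed values)
idx = nested_lattice_quantize(x, step, levels)
xq = nested_lattice_dequantize(idx, step, levels)
```

Multi-rate operation follows by changing `levels` (and hence the bit budget) without retraining, which is the practical appeal of pairing a fixed latent with a nested-lattice codebook.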
Abstract: An estimation method is presented for polynomial phase signals, i.e., those adopting the form of a complex exponential whose phase is polynomial in its indices. Transcending the scope of existing techniques, the proposed estimator can handle an arbitrary number of dimensions and an arbitrary set of polynomial degrees along each dimension; the only requirement is that the number of observations per dimension exceeds the highest degree thereon. Embodied by a highly compact sequential algorithm, this estimator exhibits a strictly linear computational complexity in the number of observations, and is efficient at high signal-to-noise ratios (SNRs). To reinforce the performance at low and medium SNRs, where any phase estimator is bound to be hampered by the inherent ambiguity caused by phase wrappings, suitable functionalities are incorporated and shown to be highly effective.
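To make the problem setting concrete, here is a one-dimensional, high-SNR toy version: unwrap the phase and least-squares fit a polynomial. This is not the paper's sequential multi-dimensional estimator (which avoids explicit unwrapping pitfalls and runs in strictly linear time); it only illustrates what a polynomial phase signal is and what the estimator must recover.

```python
import numpy as np

def poly_phase_fit(s, degree):
    """Toy 1-D polynomial-phase estimate: unwrap angle, then polyfit.
    Valid only when successive phase increments stay below pi (high SNR,
    slow phase), so wrapping ambiguity never bites."""
    n = np.arange(len(s))
    phase = np.unwrap(np.angle(s))
    return np.polyfit(n, phase, degree)     # highest-degree coefficient first

n = np.arange(64)
true_coeffs = [0.001, 0.05, 0.3]            # phase = 0.001 n^2 + 0.05 n + 0.3
s = np.exp(1j * np.polyval(true_coeffs, n)) # noiseless polynomial phase signal
est = poly_phase_fit(s, 2)
```

Note the requirement quoted in the abstract appears even here: fitting a degree-2 phase needs more than 3 samples along the dimension.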
Abstract: This paper offers a thorough analysis of the coverage performance of Low Earth Orbit (LEO) satellite networks using a strongest satellite association approach, with a particular emphasis on shadowing effects modeled through a Poisson point process (PPP)-based network framework. We derive an analytical expression for the coverage probability, which incorporates key system parameters and a distance-dependent shadowing probability function, explicitly accounting for both line-of-sight and non-line-of-sight propagation channels. To enhance the practical relevance of our findings, we provide both lower and upper bounds for the coverage probability and introduce a closed-form solution based on a simplified shadowing model. Our analysis reveals several important network design insights, including the enhancement of coverage probability by distance-dependent shadowing effects and the identification of an optimal satellite altitude that balances beam gain benefits with interference drawbacks. Notably, our PPP-based network model shows strong alignment with other established models, confirming its accuracy and applicability across a variety of satellite network configurations. The insights gained from our analysis are valuable for optimizing LEO satellite deployment strategies and improving network performance in diverse scenarios.
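The basic PPP machinery underlying such coverage analyses can be shown with the simplest case: the void probability of a PPP on the satellite sphere gives the chance that at least one satellite is above the horizon. This is a textbook illustration of the modeling tool, not the paper's coverage expression (which adds shadowing, fading, and interference); the constellation size and altitude below are assumed values.

```python
import numpy as np

R_E, h = 6371e3, 550e3                    # Earth radius, altitude in m (assumed)
r_s = R_E + h                             # radius of the satellite sphere
lam = 1000 / (4 * np.pi * r_s**2)         # PPP density for ~1000 satellites

# The spherical cap visible from a ground user (elevation > 0) has area
# 2*pi*r_s*h, so the PPP void probability yields
# P(at least one visible satellite) = 1 - exp(-lam * 2*pi*r_s*h):
p_visible = 1 - np.exp(-lam * 2 * np.pi * r_s * h)
```

The coverage probability in the paper refines this by asking not just for visibility but for the strongest satellite's SINR to clear a threshold under distance-dependent shadowing.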
Abstract: Low Earth orbit (LEO) satellite networks with mega constellations have the potential to provide 5G and beyond services ubiquitously. However, these networks may introduce mutual interference to both satellite and terrestrial networks, particularly when sharing spectrum resources. In this paper, we present a system-level performance analysis to address these interference issues using the tool of stochastic geometry. We model the spatial distributions of satellites, satellite users, terrestrial base stations (BSs), and terrestrial users using independent Poisson point processes on the surfaces of concentric spheres. Under these spatial models, we derive analytical expressions for the ergodic spectral efficiency of uplink (UL) and downlink (DL) satellite networks when they share spectrum with both UL and DL terrestrial networks. These derived ergodic expressions capture comprehensive network parameters, including the densities of satellite and terrestrial networks, the path-loss exponent, and fading. From our analysis, we determine the conditions under which spectrum sharing with UL terrestrial networks is advantageous for both UL and DL satellite networks. Our key finding is that the optimal spectrum sharing configuration among the four possible configurations depends on the density ratio between terrestrial BSs and users, providing a design guideline for spectrum management. Simulation results confirm the accuracy of our derived expressions.
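The quantity being derived, ergodic spectral efficiency under cross-network interference, can be illustrated with a stripped-down Monte Carlo: Rayleigh-faded desired and interfering powers, averaged log2(1+SINR). This toy omits the paper's sphere geometry, path loss, and point-process averaging entirely; all powers below are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
S = rng.exponential(1.0, n)     # Rayleigh-faded desired signal power
I = rng.exponential(0.2, n)     # one faded cross-network interferer (sharing on)
N0 = 0.1                        # noise power (assumed)

# Ergodic spectral efficiency = E[log2(1 + SINR)], estimated by sample mean:
se_shared = np.mean(np.log2(1 + S / (I + N0)))
se_alone = np.mean(np.log2(1 + S / N0))
```

The paper's contribution is to evaluate this expectation in closed form over the random satellite/terrestrial geometries, so the sharing-versus-isolation comparison can be made analytically rather than by simulation.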
Abstract: Deep polar codes, employing multi-layered polar kernel pre-transforms in series, are recently introduced variants of pre-transformed polar codes. These codes have demonstrated the ability to reduce the number of minimum weight codewords, thereby closely achieving finite-block length capacity with successive cancellation list (SCL) decoders in certain scenarios. However, when the list size of the SCL decoder is small, which is crucial for low-latency communication applications, the reduction in the number of minimum weight codewords does not necessarily improve decoding performance. To address this limitation, we propose an alternative pre-transform technique to enhance the suitability of polar codes for SCL decoders with practical list sizes. Leveraging the fact that the SCL decoding error event set can be decomposed into two exclusive error event sets, our approach applies two different types of pre-transformations, each targeting the reduction of one of the two error event sets. Extensive simulation results under various block lengths and code rates have demonstrated that our codes consistently outperform all existing state-of-the-art pre-transformed polar codes, including CRC-aided polar codes and polarization-adjusted convolutional codes, when decoded using SCL decoders with small list sizes.
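The common structure shared by all the pre-transformed polar codes named above (CRC-aided, PAC, deep polar) is encoding of the form u = vT followed by x = uG_n over GF(2), with T upper triangular. The sketch below shows that generic skeleton with a toy T; the paper's specific pair of pre-transforms is not reproduced here.

```python
import numpy as np

def polar_gen(n):
    """n-th Kronecker power of the 2x2 polar kernel [[1,0],[1,1]]."""
    G = np.array([[1, 0], [1, 1]], dtype=int)
    Gn = np.array([[1]], dtype=int)
    for _ in range(n):
        Gn = np.kron(Gn, G)
    return Gn

def pretransformed_polar_encode(v, T, Gn):
    """Generic pre-transformed polar encoding over GF(2): u = v T, x = u Gn.
    Upper-triangular T covers CRC-aided, PAC, and layered-kernel variants."""
    u = v @ T % 2
    return u @ Gn % 2

n = 3                                        # block length N = 8
Gn = polar_gen(n)
T = np.triu(np.ones((8, 8), dtype=int))      # toy upper-triangular pre-transform
v = np.array([1, 0, 1, 1, 0, 0, 1, 0])
x = pretransformed_polar_encode(v, T, Gn)
```

With T equal to the identity this reduces to plain polar encoding, which is why the pre-transform can reshape the weight spectrum without changing the decoder architecture.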
Abstract: In frequency-division duplexing (FDD) multiple-input multiple-output (MIMO) systems, obtaining accurate downlink channel state information (CSI) for precoding is highly challenging due to the tremendous feedback overhead with the growing number of antennas. Utilizing uplink pilots for downlink CSI estimation is a promising approach that can eliminate CSI feedback. However, the downlink CSI estimation accuracy diminishes significantly as the number of channel paths increases, resulting in reduced spectral efficiency. In this paper, we demonstrate that achieving downlink spectral efficiency comparable to perfect CSI is feasible by combining uplink CSI with limited downlink CSI feedback information. Our proposed downlink CSI feedback strategy transmits quantized phase information of downlink channel paths, deviating from conventional limited feedback methods. We put forth a mean square error (MSE)-optimal downlink channel reconstruction method by jointly exploiting the uplink CSI and the limited downlink CSI. Armed with the MSE-optimal estimator, we derive the MSE as a function of the number of feedback bits for phase quantization. Subsequently, we present an optimal feedback bit allocation method for minimizing the MSE in the reconstructed channel through phase quantization. Utilizing a robust downlink precoding technique, we establish that the proposed downlink channel reconstruction method is sufficient for attaining a sum-spectral efficiency comparable to perfect CSI.
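The core feedback primitive, uniform quantization of a path phase with B bits, behaves as sketched below: the quantization MSE tracks the classical step²/12 law, which is the kind of MSE-versus-bits relation the bit allocation optimizes. This is an illustration of uniform phase quantization only; the paper's estimator and allocation rule are not reproduced.

```python
import numpy as np

def quantize_phase(theta, bits):
    """Uniform phase quantizer with 2**bits levels (step = 2*pi / 2**bits)."""
    step = 2 * np.pi / 2**bits
    return np.round(theta / step) * step

rng = np.random.default_rng(2)
theta = rng.uniform(-np.pi, np.pi, 10_000)
# Wrap the error back to (-pi, pi] before measuring MSE:
mse = {B: np.mean(np.angle(np.exp(1j * (theta - quantize_phase(theta, B))))**2)
       for B in (2, 4, 6)}
```

Because each extra bit halves the step, the MSE drops by roughly a factor of four per bit, so an optimal allocation naturally spends more bits on paths whose phase errors cost the most reconstruction MSE.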
Abstract: Distributed learning is commonly used for accelerating model training by harnessing the computational capabilities of multiple edge devices. However, in practical applications, the communication delay emerges as a bottleneck due to the substantial information exchange required between workers and a central parameter server. SignSGD with majority voting (signSGD-MV) is an effective distributed learning algorithm that can significantly reduce communication costs by one-bit quantization. However, due to heterogeneous computational capabilities, it fails to converge when the mini-batch sizes differ among workers. To overcome this, we propose a novel signSGD optimizer with \textit{federated voting} (signSGD-FV). The idea of federated voting is to exploit learnable weights to perform weighted majority voting. The server learns the weights assigned to the edge devices in an online fashion based on their computational capabilities. Subsequently, these weights are employed to decode the signs of the aggregated local gradients in such a way as to minimize the sign decoding error probability. We provide a unified convergence rate analysis framework applicable to scenarios where the estimated weights are known to the parameter server either perfectly or imperfectly. We demonstrate that the proposed signSGD-FV algorithm has a theoretical convergence guarantee even when edge devices use heterogeneous mini-batch sizes. Experimental results show that signSGD-FV outperforms signSGD-MV, exhibiting a faster convergence rate, especially under heterogeneous mini-batch sizes.
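The server-side decoding step, weighted majority voting over workers' sign gradients, is simple to sketch. The fixed weights below stand in for the reliabilities the server would learn online in signSGD-FV; everything here is a toy illustration, not the paper's weight-learning rule.

```python
import numpy as np

def federated_vote(sign_grads, weights):
    """Weighted majority vote over per-worker sign gradients.
    sign_grads: (M, d) array of +/-1 votes from M workers;
    weights: (M,) per-worker reliability weights (uniform -> signSGD-MV)."""
    return np.sign(weights @ sign_grads)

votes = np.array([[ 1, -1,  1],    # worker 0 (large mini-batch, reliable)
                  [ 1,  1, -1],    # worker 1
                  [-1,  1,  1]])   # worker 2
w_uniform = np.ones(3)             # signSGD-MV: plain majority
w_learned = np.array([3.0, 1.0, 1.0])  # assumed learned weights: trust worker 0
g_mv = federated_vote(votes, w_uniform)
g_fv = federated_vote(votes, w_learned)
```

On coordinates where workers disagree, the learned weights let the reliable worker's vote prevail, which is exactly how federated voting lowers the sign decoding error probability under heterogeneous mini-batch sizes.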
Abstract: Block orthogonal sparse superposition (BOSS) codes are a class of joint coded modulation methods that can closely approach the finite-blocklength capacity with a low-complexity decoder at a few coding rates under Gaussian channels. However, for fading channels, the code performance degrades considerably because coded symbols experience different channel fading effects. In this paper, we put forth novel joint demodulation and decoding methods for BOSS codes under fading channels. For a fast fading channel, we present a minimum mean square error approximate maximum a posteriori (MMSE-A-MAP) algorithm for the joint demodulation and decoding when channel state information is available at the receiver (CSIR). We also propose a joint demodulation and decoding method without using CSIR for a block fading channel scenario. We refer to this as the non-coherent sphere decoding (NSD) algorithm. Simulation results demonstrate that BOSS codes with MMSE-A-MAP decoding outperform CRC-aided polar codes, while NSD decoding achieves comparable performance to quasi-maximum likelihood decoding with significantly reduced complexity. Both decoding algorithms are suitable for parallelization, satisfying low-latency constraints. Additionally, real-time simulations on a software-defined radio testbed validate the feasibility of using BOSS codes for low-power transmission.
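The encoder side of a BOSS-style code can be sketched as a sparse superposition: the dictionary columns are split into blocks and the message selects one column per block. The unitary DFT dictionary, block sizes, and uniform scaling below are assumptions for illustration; the paper's exact construction (and its fading-channel decoders) is not reproduced.

```python
import numpy as np

def boss_encode(indices, U, num_blocks):
    """Sparse-superposition encoding sketch: superpose one column of the
    orthonormal dictionary U per block; indices pick the active column,
    carrying log2(block_size) bits each."""
    N = U.shape[0]
    block = N // num_blocks
    x = np.zeros(N, dtype=complex)
    for b, k in enumerate(indices):
        x += U[:, b * block + k]
    return x / np.sqrt(num_blocks)    # normalize to unit codeword energy

N, B = 16, 4
U = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT dictionary (assumed)
x = boss_encode([1, 0, 3, 2], U, B)
```

Orthogonality of the selected columns is what keeps noise-free detection trivial over Gaussian channels; fading destroys that orthogonality, which is precisely why the joint demodulation/decoding methods above are needed.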
Abstract: This paper investigates full-duplex (FD) multi-user multiple-input multiple-output (MU-MIMO) system design with coarse quantization. We first analyze the impact of self-interference (SI) on quantization in FD single-input single-output systems. The analysis elucidates that the minimum required number of analog-to-digital converter (ADC) bits is logarithmically proportional to the ratio of total received power to the received power of desired signals. Motivated by this, we design an FD MIMO beamforming method that effectively manages the SI. Dividing a spectral efficiency maximization beamforming problem into two sub-problems for alternating optimization, we address the first by optimizing the precoder: obtaining a generalized eigenvalue problem from the first-order optimality condition, where the principal eigenvector is the optimal stationary solution, and adopting a power iteration method to identify this eigenvector. Subsequently, a quantization-aware minimum mean square error combiner is computed for the derived precoder. Through numerical studies, we observe that the proposed beamformer reduces the minimum required number of ADC bits for achieving higher spectral efficiency than that of half-duplex (HD) systems, compared to FD benchmarks. The overall analysis shows that, unlike with quantized HD systems, more than 6 bits are required for the ADC to fully realize the potential of the quantized FD system.
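The logarithmic scaling of required ADC bits can be illustrated with the classical ~6 dB-per-bit rule of thumb, i.e., roughly 0.5·log2 of the power ratio. The constant in this sketch is an assumption for illustration; the paper derives the exact condition, not this rule of thumb.

```python
import numpy as np

def min_adc_bits(p_total, p_desired):
    """Toy version of the scaling result: ADC bits grow with
    log2(total received power / desired signal power); the factor 0.5
    (one bit per ~6 dB of dynamic range) is an assumed rule of thumb."""
    return int(np.ceil(0.5 * np.log2(p_total / p_desired)))

# Strong self-interference: SI 40 dB above the desired signal
p_desired = 1.0
p_si = 10**4.0
bits = min_adc_bits(p_desired + p_si, p_desired)
```

The point of the abstract's beamforming design is to shrink this power ratio before the ADC, so the same spectral efficiency becomes reachable with fewer quantization bits.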