Abstract:We introduce a practical sign-dependent sequence selection metric for probabilistic amplitude shaping and propose a simple method to predict the gains in signal-to-noise ratio (SNR) for sequence selection. The proposed metric provides a $0.5$ dB SNR gain for single-polarized 256-QAM transmission over a long-haul fiber link.
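To make the selection step concrete, here is a minimal sketch of selection-based shaping with a placeholder per-block metric; the sign-dependent metric introduced in the paper is not reproduced, and all names and parameters below are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch of selection-based PAS: the placeholder metric is NOT the
# paper's sign-dependent metric; it merely stands in for a per-block score that
# a sequence-selection shaper would minimize.
rng = np.random.default_rng(0)

def placeholder_metric(symbols):
    """Toy per-block score: variance of the symbol energies. The paper's metric
    additionally depends on the signs chosen for the shaped amplitudes."""
    return float(np.var(np.abs(symbols) ** 2))

def select_sequence(candidates, metric=placeholder_metric):
    """Return the candidate block with the smallest metric value."""
    scores = [metric(c) for c in candidates]
    return candidates[int(np.argmin(scores))]

# Example: pick the best of 8 candidate blocks of 256 random 256-QAM symbols.
levels = np.arange(-15, 16, 2)
constellation = np.array([a + 1j * b for a in levels for b in levels])
candidates = [rng.choice(constellation, size=256) for _ in range(8)]
best_block = select_sequence(candidates)
```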
Abstract:We introduce a Bayesian carrier phase recovery (CPR) algorithm which is robust against low signal-to-noise ratio scenarios. It is therefore effective for phase recovery in systems employing probabilistic amplitude shaping (PAS). Results validate that the new algorithm overcomes the degradation experienced by blind phase-search CPR for PAS.
Abstract:Cellular Internet-of-things (C-IoT) user equipments (UEs) typically transmit frequent but small amounts of uplink data to the base station. Undergoing a traditional random access procedure (RAP) to transmit such small but frequent data presents a considerable overhead. As a remedy, preconfigured uplink resources (PURs) are typically used in newer UEs, where the devices are allocated uplink resources beforehand to transmit on without following the RAP. A prerequisite for transmitting on PURs is that the UEs must use a valid timing advance (TA) so that they do not interfere with transmissions of other nodes in adjacent resources. One solution to this end is to validate the TA previously held by the UE to ensure that it is still valid. While this validation is trivial for stationary UEs, mobile UEs often encounter conditions where the previous TA is no longer valid and a new one must be requested by falling back to the legacy RAP. This limits the applicability of PURs in mobile UEs. To counter this drawback and ensure a near-universal adoption of transmitting on PURs, we propose new machine learning aided solutions for validation and prediction of the TA for UEs with any type of mobility. We conduct comprehensive simulation evaluations across different types of communication environments to demonstrate that our proposed solutions provide up to 98.7% accuracy in predicting the TA.
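As a rough illustration of ML-aided TA prediction, the sketch below trains a regressor on a synthetic dataset; the feature set (past TA values, speed, RSRP), the value ranges, and the model choice are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical features and synthetic target for illustration only.
rng = np.random.default_rng(1)
n_samples = 5000

past_ta = rng.integers(0, 1282, size=(n_samples, 3))      # last 3 TA indices (range assumed)
speed = rng.uniform(0, 120, size=(n_samples, 1))           # UE speed in km/h (assumed)
rsrp = rng.uniform(-120, -70, size=(n_samples, 1))         # RSRP in dBm (assumed)
X = np.hstack([past_ta, speed, rsrp])
# Synthetic target: the next TA drifts with speed plus noise (illustration only).
y = past_ta[:, -1] + 0.05 * speed[:, 0] + rng.normal(0, 1, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("mean absolute TA prediction error:", np.abs(model.predict(X_te) - y_te).mean())
```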
Abstract:Non-terrestrial networks (NTNs) complement their terrestrial counterparts in enabling ubiquitous connectivity globally by serving unserved and/or underserved areas of the world. While supporting enhanced mobile broadband (eMBB) data over NTNs has been extensively studied in the past, the focus on massive machine type communication (mMTC) over NTNs is currently growing, as also witnessed by the new study and work items included in the 3rd generation partnership project (3GPP) agenda for commissioning specifications for Internet-of-Things (IoT) communications over NTNs. Supporting mMTC in non-terrestrial cellular IoT (C-IoT) networks requires jointly addressing the unique challenges introduced by NTNs and C-IoT communications. In this paper, we tackle one such issue, caused by the extended round-trip time and increased path loss in NTNs, which results in degraded network throughput. We propose smarter transport block scheduling methods that can increase the efficiency of resource utilization. We conduct end-to-end link-level simulations of C-IoT traffic over NTNs and present numerical results of the achieved data rate gains to show the performance of our proposed solutions against legacy scheduling methods.
Abstract:Fiber nonlinearity effects cap achievable rates and ranges in long-haul optical fiber communication links. Conventional nonlinearity compensation methods, such as perturbation theory-based nonlinearity compensation (PB-NLC), attempt to compensate for the nonlinearity by approximating analytical solutions to the signal propagation over optical fibers. However, their practical usability is limited by model mismatch and the immense computational complexity associated with the analytical computation of perturbation triplets and the nonlinearity distortion field. Recently, machine learning techniques have been used to optimize parameters of PB-based approaches, which traditionally have been determined analytically from physical models. It has been claimed in the literature that the learned PB-NLC approaches have improved performance and/or reduced computational complexity over their non-learned counterparts. In this paper, we first revisit the acclaimed benefits of the learned PB-NLC approaches by carefully carrying out a comprehensive performance-complexity analysis utilizing state-of-the-art complexity reduction methods. Interestingly, our results show that least squares-based PB-NLC with clustering quantization has the best performance-complexity trade-off among the learned PB-NLC approaches. Second, we advance the state-of-the-art of learned PB-NLC by proposing and designing a fully learned structure. We apply a bi-directional recurrent neural network to learn perturbation triplets that resemble those obtained from the analytical computation and are used as input features for the neural network to estimate the nonlinearity distortion field. Finally, we demonstrate through numerical simulations that our proposed fully learned approach achieves an improved performance-complexity trade-off compared to the existing learned and non-learned PB-NLC techniques.
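The least-squares flavor of learned PB-NLC can be pictured as a linear regression over intra-channel triplet products. The sketch below uses synthetic data and a simplified triplet construction, so the window size, coefficient values, and compensation step are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Minimal sketch of least-squares learning of perturbation coefficients.
rng = np.random.default_rng(2)
N, W = 4096, 5                                     # symbols, one-sided triplet window

tx = rng.choice([-3, -1, 1, 3], size=N) + 1j * rng.choice([-3, -1, 1, 3], size=N)

def triplet_features(symbols, window):
    """Simplified intra-channel triplet products a[n+m] * a[n+k] * conj(a[n+m+k])."""
    idx = range(-window, window + 1)
    feats = []
    for m in idx:
        for k in idx:
            feats.append(np.roll(symbols, -m) * np.roll(symbols, -k)
                         * np.conj(np.roll(symbols, -(m + k))))
    return np.stack(feats, axis=1)                 # shape (N, number_of_triplets)

A = triplet_features(tx, W)
# Synthetic "distortion field" built from a hidden coefficient vector plus noise.
c_true = (rng.normal(size=A.shape[1]) + 1j * rng.normal(size=A.shape[1])) * 1e-3
d = A @ c_true + 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Least-squares estimate of the perturbation coefficients, then subtract the
# estimated distortion (in practice the features come from the received signal).
c_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
rx_compensated = (tx + d) - A @ c_hat
```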
Abstract:Several machine learning inspired methods for perturbation-based fiber nonlinearity compensation (PB-NLC) have been presented in recent literature. We critically revisit their acclaimed benefits over non-learned methods. Numerical results suggest that learned linear processing of perturbation triplets in PB-NLC is preferable over feedforward neural-network solutions.
Abstract:Overcoming fiber nonlinearity is one of the core challenges limiting the capacity of optical fiber communication systems. Machine learning based solutions such as learned digital backpropagation (LDBP) and the recently proposed deep convolutional recurrent neural network (DCRNN) have been shown to be effective for fiber nonlinearity compensation (NLC). Incorporating distributed compensation of polarization mode dispersion (PMD) within the learned models can improve their performance even further but at the same time, it also couples the compensation of nonlinearity and PMD. Consequently, it is important to consider the time variation of PMD for such a joint compensation scheme. In this paper, we investigate the impact of PMD drift on the DCRNN model with distributed compensation of PMD. We propose a transfer learning based selective training scheme to adapt the learned neural network model to changes in PMD. We demonstrate that fine-tuning only a small subset of weights as per the proposed method is sufficient for adapting the model to PMD drift. Using decision directed feedback for online learning, we track continuous PMD drift resulting from a time-varying rotation of the state of polarization (SOP). We show that transferring knowledge from a pre-trained base model using the proposed scheme significantly reduces the re-training efforts for different PMD realizations. Applying the hinge model for SOP rotation, our simulation results show that the learned models maintain their performance gains while tracking the PMD.
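The selective-training idea can be sketched as freezing a pre-trained model and fine-tuning only a small, PMD-related subset of weights with decision-directed targets. The stand-in model below is not the DCRNN of the paper; layer names, sizes, and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of transfer-learning-based selective fine-tuning for PMD drift.
class TinyEqualizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.frontend = nn.Conv1d(2, 16, kernel_size=11, padding=5)   # shared feature layer
        self.pmd_taps = nn.Conv1d(16, 16, kernel_size=5, padding=2)   # PMD-related subset
        self.readout = nn.Conv1d(16, 2, kernel_size=1)

    def forward(self, x):
        return self.readout(torch.tanh(self.pmd_taps(torch.relu(self.frontend(x)))))

model = TinyEqualizer()
# model.load_state_dict(torch.load("base_model.pt"))   # pre-trained base model (assumed path)

# Freeze everything, then unfreeze only the small PMD-related subset of weights.
for p in model.parameters():
    p.requires_grad = False
for p in model.pmd_taps.parameters():
    p.requires_grad = True

opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.MSELoss()

def fine_tune_step(rx_block, decided_symbols):
    """One decision-directed update: hard-decided symbols act as training targets."""
    opt.zero_grad()
    loss = loss_fn(model(rx_block), decided_symbols)
    loss.backward()
    opt.step()
    return loss.item()
```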
Abstract:Probabilistic amplitude shaping (PAS) is a practical means to achieve a shaping gain in optical fiber communication. However, PAS and shaping in general also affect the signal-dependent generation of nonlinear interference. This provides an opportunity for nonlinearity mitigation through PAS, which is also referred to as a nonlinear shaping gain. In this paper, we introduce a linear lowpass filter model that relates transmitted symbol-energy sequences and nonlinear distortion experienced in an optical fiber channel. Based on this model, we conduct a nonlinearity analysis of PAS with respect to shaping blocklength and mapping strategy. Our model explains results and relationships found in literature and can be used as a design tool for PAS with improved nonlinearity tolerance. We use the model to introduce a new metric for PAS with sequence selection. We perform simulations of selection-based PAS with various amplitude shapers and mapping strategies to demonstrate the effectiveness of the new metric in different optical fiber system scenarios.
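A minimal sketch of the filter-model idea, assuming a generic lowpass response: the transmitted symbol-energy sequence is lowpass filtered and the power of the filtered fluctuation serves as a proxy for the nonlinear interference a block will generate. The filter shape and length below are placeholders, not the fitted response from the paper.

```python
import numpy as np

def energy_lowpass_metric(symbols, taps=64):
    """Proxy metric: power of the lowpass-filtered symbol-energy fluctuation."""
    energies = np.abs(symbols) ** 2
    h = np.hamming(taps)
    h /= h.sum()                                     # unit-DC-gain lowpass filter (assumed shape)
    filtered = np.convolve(energies - energies.mean(), h, mode="same")
    return float(np.mean(filtered ** 2))
```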
Abstract:Monitoring grid assets continuously is critical in ensuring the reliable operation of the electricity grid system and improving its resilience in case of a defect. In light of several asset monitoring techniques in use, power line communication (PLC) enables a low-cost cable diagnostics solution by re-using smart grid data communication modems to also infer the cable health from the inherently estimated communication channel state information. Traditional PLC-based cable diagnostics solutions depend on prior knowledge of the cable type, network topology, and/or characteristics of the anomalies. In contrast, we develop an asset monitoring technique in this paper that can detect various types of anomalies in the grid without any prior domain knowledge. To this end, we design a solution that first uses time-series forecasting to predict the PLC channel state information at any given point in time based on its historical data. Under the assumption that the prediction error follows a Gaussian distribution, we then perform a chi-squared statistical test to determine the significance level of the resultant Mahalanobis distance to build our anomaly detector. We demonstrate the effectiveness and universality of our solution via evaluations conducted using both synthetic and real-world data extracted from low- and medium-voltage distribution networks.
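The detection step can be sketched as follows, assuming the forecasting model already supplies a prediction: the forecast error's squared Mahalanobis distance is compared against a chi-squared quantile. The feature dimension, significance level, and data below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def fit_error_statistics(errors):
    """Estimate mean and inverse covariance of forecast errors from anomaly-free data."""
    mu = errors.mean(axis=0)
    cov = np.cov(errors, rowvar=False)
    return mu, np.linalg.inv(cov)

def is_anomaly(observed, predicted, mu, cov_inv, alpha=0.01):
    """Flag an anomaly if the squared Mahalanobis distance of the forecast error
    exceeds the chi-squared quantile at significance level alpha."""
    e = observed - predicted - mu
    d2 = e @ cov_inv @ e
    return d2 > chi2.ppf(1.0 - alpha, df=len(e))

# Example with synthetic 8-dimensional channel-state features.
rng = np.random.default_rng(3)
train_errors = rng.normal(0, 0.1, size=(1000, 8))
mu, cov_inv = fit_error_statistics(train_errors)
print(is_anomaly(rng.normal(0, 0.1, 8), np.zeros(8), mu, cov_inv))   # likely False
print(is_anomaly(np.full(8, 1.0), np.zeros(8), mu, cov_inv))         # likely True
```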
Abstract:Derived from the regular perturbation treatment of the nonlinear Schrödinger equation, a machine learning-based scheme to mitigate the intra-channel optical fiber nonlinearity is proposed. Referred to as the perturbation theory-aided (PA) learned digital back-propagation (LDBP), the proposed scheme constructs a deep neural network (DNN) in a way similar to the split-step Fourier method: linear and nonlinear operations alternate. Inspired by the perturbation analysis, the intra-channel cross-phase modulation term is conveniently represented by matrix operations in the DNN. The introduction of this term in each nonlinear operation considerably improves the performance and adds flexibility to PA-LDBP by allowing the number of spans per step to be adjusted. The proposed scheme is evaluated by numerical simulations of a single-carrier optical fiber communication system operating at 32 Gbaud with 64-quadrature amplitude modulation and 20×80 km transmission distance. The results show that the proposed scheme achieves approximately 3.5 dB, 1.8 dB, 1.4 dB, and 0.5 dB performance gain in terms of $Q^2$ factor over linear compensation when the number of spans per step is 1, 2, 4, and 10, respectively. Two methods are proposed to reduce the complexity of PA-LDBP, i.e., pruning the number of perturbation coefficients and performing chromatic dispersion compensation in the frequency domain for the multi-span-per-step cases. Investigation of the performance and complexity suggests that PA-LDBP attains improved performance gains with reduced complexity when compared to LDBP in the cases of 4 and 10 spans per step.
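The alternating structure can be sketched as repeated linear (dispersion-like filtering) and nonlinear (power-dependent phase rotation plus a matrix term mimicking the intra-channel XPM contribution) stages; the fixed toy parameters below stand in for the trainable filters and matrices of PA-LDBP and are not the paper's values.

```python
import numpy as np

def linear_step(x, cd_filter):
    """Linear stage: chromatic-dispersion-like FIR filtering (toy filter)."""
    return np.convolve(x, cd_filter, mode="same")

def nonlinear_step(x, gamma_eff, xpm_matrix):
    """Nonlinear stage: SPM phase rotation plus a matrix term coupling the
    powers of neighbouring symbols, mimicking the intra-channel XPM term."""
    power = np.abs(x) ** 2
    phase = gamma_eff * power + xpm_matrix @ power
    return x * np.exp(1j * phase)

def pa_ldbp_like(x, n_steps, cd_filter, gamma_eff, xpm_matrix):
    for _ in range(n_steps):
        x = linear_step(x, cd_filter)
        x = nonlinear_step(x, gamma_eff, xpm_matrix)
    return x

# Example with fixed toy parameters (in PA-LDBP these would be learned).
rng = np.random.default_rng(4)
N = 256
x = rng.choice([-3, -1, 1, 3], N) + 1j * rng.choice([-3, -1, 1, 3], N)
cd_filter = np.hamming(11) / np.hamming(11).sum()
xpm_matrix = 1e-3 * np.eye(N, k=1) + 1e-3 * np.eye(N, k=-1)   # couple adjacent symbols
y = pa_ldbp_like(x, n_steps=4, cd_filter=cd_filter, gamma_eff=1e-3, xpm_matrix=xpm_matrix)
```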