Abstract:We experimentally demonstrate the joint optimization of transmitter and receiver parameters in directly modulated laser systems, showing superior performance compared to nonlinear receiver-only equalization while using fewer memory taps, less bandwidth, and lower radiofrequency power.
Abstract:Existing communication hardware is being pushed to its limits to accommodate the ever-increasing global internet usage. This leads to non-linear distortion in the communication link, which requires non-linear equalization techniques to operate the link at a reasonable bit error rate. This paper addresses the challenge of blind non-linear equalization using a variational autoencoder (VAE) with a second-order Volterra channel model. The VAE framework's cost function, the evidence lower bound (ELBO), is derived for real-valued constellations and can be evaluated analytically without resorting to sampling techniques. We demonstrate the effectiveness of our approach through simulations on a synthetic Wiener-Hammerstein channel and a simulated intensity modulated direct detection (IM/DD) optical link. The results show significant improvements in equalization performance compared to a VAE with linear channel assumptions, highlighting the importance of appropriate channel modeling in unsupervised VAE equalizer frameworks.
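To make the VAE-equalizer idea above concrete, the following is a minimal PyTorch sketch of a blind equalizer with a second-order Volterra channel model as the decoder: the encoder maps received samples to per-symbol probabilities over a real-valued 4-PAM alphabet, and the ELBO is approximated by a Gaussian reconstruction term on the soft symbols plus the entropy of the posterior. The FIR encoder, memory lengths, alphabet and entropy weighting are illustrative assumptions; the paper's analytic ELBO evaluation is not reproduced here.

```python
# Minimal sketch (PyTorch) of a VAE-style blind equalizer with a second-order
# Volterra "decoder" channel model. All architecture choices are illustrative
# assumptions, not the paper's exact analytic ELBO formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

PAM4 = torch.tensor([-3., -1., 1., 3.])          # real-valued constellation (assumed)

class VolterraVAE(nn.Module):
    def __init__(self, enc_taps=11, mem=5):
        super().__init__()
        # Encoder q(x|y): FIR filter over received samples -> logits per symbol
        self.encoder = nn.Conv1d(1, len(PAM4), enc_taps, padding=enc_taps // 2)
        # Decoder p(y|x): linear kernel + second-order Volterra kernel
        self.h1 = nn.Parameter(torch.randn(mem) * 0.1)
        self.h2 = nn.Parameter(torch.zeros(mem, mem))
        self.mem = mem

    def volterra(self, x):
        # x: (batch, time) soft symbols; lagged tensor X[:, t, k] = x[t - k]
        # (torch.roll wraps around at the edges -- acceptable for a sketch)
        X = torch.stack([torch.roll(x, k, dims=1) for k in range(self.mem)], -1)
        lin = X @ self.h1
        quad = torch.einsum('btk,kl,btl->bt', X, self.h2, X)
        return lin + quad

    def forward(self, y):
        logits = self.encoder(y.unsqueeze(1)).permute(0, 2, 1)   # (B, T, 4)
        q = F.softmax(logits, dim=-1)
        x_soft = q @ PAM4                                        # E_q[x]
        y_hat = self.volterra(x_soft)
        # ELBO surrogate: Gaussian reconstruction term plus posterior entropy
        recon = F.mse_loss(y_hat, y)
        entropy = -(q * q.clamp_min(1e-12).log()).sum(-1).mean()
        return recon - 1e-3 * entropy                            # loss to minimize

model = VolterraVAE()
y = torch.randn(8, 256)              # received waveform at one sample/symbol (assumed)
loss = model(y); loss.backward()     # blind: only the received signal is used
```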
Abstract:This paper investigates the application of end-to-end (E2E) learning for the joint optimization of pulse-shaper and receiver filter to reduce intersymbol interference (ISI) in bandwidth-limited communication systems. We study this in two numerical simulation models: 1) an additive white Gaussian noise (AWGN) channel with bandwidth limitation and 2) an intensity modulated direct detection (IM/DD) link employing an electro-absorption modulator. For both simulation models, we implement a wavelength division multiplexing (WDM) scheme to ensure that the learned filters adhere to the bandwidth constraints of the WDM channels. Our findings reveal that E2E learning greatly surpasses traditional single-sided optimization of either the transmitter pulse-shaper or the receiver filter, achieving significant symbol error rate gains with shorter filter lengths. These results suggest that E2E learning can decrease the complexity and enhance the performance of future high-speed optical communication systems.
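As a hedged illustration of the E2E idea, the sketch below jointly learns a transmitter FIR pulse-shaper and a receiver FIR filter through a differentiable band-limited AWGN channel by backpropagating a symbol-decision loss. The moving-average low-pass channel, 4-PAM mapping, oversampling factor and SNR are placeholders rather than the paper's simulation parameters.

```python
# Minimal PyTorch sketch of E2E-learned Tx pulse shaper and Rx filter over a
# bandwidth-limited AWGN channel. All parameter values are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

SPS = 4                                    # samples per symbol (assumed)
levels = torch.tensor([-3., -1., 1., 3.])  # 4-PAM alphabet

class E2ESystem(nn.Module):
    def __init__(self, tx_taps=17, rx_taps=17, ch_taps=9):
        super().__init__()
        self.tx = nn.Parameter(torch.randn(tx_taps) * 0.1)   # learnable pulse shaper
        self.rx = nn.Parameter(torch.randn(rx_taps) * 0.1)   # learnable receiver filter
        # Fixed "bandwidth limitation": simple moving-average low-pass FIR
        self.register_buffer('ch', torch.ones(ch_taps) / ch_taps)

    def forward(self, symbols, snr_db=15.0):
        x = torch.zeros(symbols.numel() * SPS)
        x[::SPS] = levels[symbols]                             # upsample to SPS
        def fir(sig, taps):                                    # 'same'-length FIR filtering
            return F.conv1d(sig.view(1, 1, -1), taps.view(1, 1, -1),
                            padding=taps.numel() // 2).view(-1)
        s = fir(x, self.tx)
        s = fir(s, self.ch)                                    # band-limited channel
        noise = torch.randn_like(s) * (s.pow(2).mean() / 10**(snr_db / 10)).sqrt()
        r = fir(s + noise, self.rx)
        y = r[::SPS]                                           # sample at symbol positions
        logits = -(y.unsqueeze(-1) - levels).pow(2)            # distance-based soft demapping
        return F.cross_entropy(logits, symbols)

model = E2ESystem()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    syms = torch.randint(0, 4, (2048,))
    loss = model(syms)
    opt.zero_grad(); loss.backward(); opt.step()
```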
Abstract:We present a comprehensive phase noise characterization of a mid-IR Cr:ZnS frequency comb. Despite their emergence as a platform for high-resolution dual-comb spectroscopy, detailed investigations into the phase noise of Cr:ZnS combs have been lacking. To address this, we use a recently proposed phase noise measurement technique that employs multi-heterodyne detection and subspace tracking. This allows for the measurement of the common-mode, repetition-rate and higher-order phase noise terms, and their scaling as a function of the comb-line number, using a single measurement set-up. We demonstrate that the comb under test is dominated by common-mode phase noise, while all other phase noise terms are below the measurement noise floor (~ -120 dB rad^2/Hz) and are therefore not identifiable.
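The sketch below gives a rough, assumption-laden illustration of the subspace idea: beat-note phases extracted from a multi-heterodyne measurement are arranged in a time-by-line matrix, and a one-shot SVD (standing in for the paper's subspace-tracking algorithm) separates a dominant common-mode trajectory, whose power spectral density can then be estimated. The synthetic phase data, line count and sampling rate are invented for illustration.

```python
# Minimal NumPy/SciPy sketch of subspace-based separation of comb phase noise
# from multi-heterodyne beat notes. Synthetic data; not the paper's algorithm.
import numpy as np
from scipy.signal import welch

fs, T, n_lines = 1e6, 2**18, 20                   # sample rate, samples, comb lines (assumed)
rng = np.random.default_rng(0)

# Synthetic beat-note phases: common-mode term plus a small repetition-rate-like
# term that scales linearly with the line index k.
common = np.cumsum(rng.standard_normal(T)) * 1e-2
rep = np.cumsum(rng.standard_normal(T)) * 1e-4
k = np.arange(n_lines)
Phi = common[:, None] + rep[:, None] * k[None, :] + 1e-3 * rng.standard_normal((T, n_lines))

# SVD of the mean-removed phase matrix: leading components span the phase-noise subspace
U, s, Vt = np.linalg.svd(Phi - Phi.mean(0), full_matrices=False)
modes = U[:, :2] * s[:2]                           # dominant phase-noise trajectories
f, psd = welch(modes[:, 0], fs=fs, nperseg=4096)   # PSD of the dominant (common-mode-like) term
print("dominant-mode weights across the first lines:", Vt[0, :3])
```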
Abstract:Nowadays, as the demand for more powerful computing resources continues to increase, alternative advanced computing paradigms are under extensive investigation. Significant effort has been made to deviate from conventional von Neumann architectures. In-memory computing has emerged in the field of electronics as a possible solution to the infamous bottleneck between memory and computing processors, which reduces the effective data throughput. In photonics, novel schemes attempt to collocate the computing processor and memory in a single device. Photonics offers the flexibility of multiplexing streams of data not only spatially and in time, but also in frequency or, equivalently, in wavelength, which makes it highly suitable for parallel computing. Here, we numerically show the use of time and wavelength division multiplexing (WDM) to solve four independent tasks simultaneously on a single photonic chip, serving as a proof of concept for our proposal. The system is a time-delay reservoir computing (TDRC) scheme based on a microring resonator (MRR). The addressed tasks cover different applications: time-series prediction, waveform signal classification, wireless channel equalization, and radar signal prediction. The system is also tested for simultaneous computing of up to 10 instances of the same task, exhibiting excellent performance. The footprint of the system is reduced by using time-division multiplexing of the nodes that act as the neurons of the studied neural network scheme. WDM is used for the parallelization of wavelength channels, each addressing a single task. By adjusting the input power and frequency of each optical channel, we can achieve levels of performance for each of the tasks that are comparable to those quoted in state-of-the-art reports focusing on single-task operation...
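As a minimal illustration of the TDRC principle underlying the scheme above (with a generic sinusoidal nonlinearity standing in for the microring dynamics), the sketch below builds virtual-node states by masking the input, feeding back the previous node states, and training a ridge-regression readout. The node count, feedback strength and the toy prediction task are illustrative assumptions.

```python
# Minimal NumPy sketch of time-delay reservoir computing (TDRC) with virtual
# nodes. A sinusoidal node stands in for the MRR; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_nodes, feedback, ridge = 50, 0.5, 1e-6
mask = rng.uniform(-1, 1, N_nodes)               # fixed random input mask

def reservoir_states(u):
    # u: 1-D input sequence; returns (len(u), N_nodes) virtual-node state matrix
    states = np.zeros((len(u), N_nodes))
    prev = np.zeros(N_nodes)
    for t, ut in enumerate(u):
        x = np.sin(mask * ut + feedback * prev)   # stand-in nonlinearity (not the MRR model)
        states[t] = x
        prev = x
    return states

# Example: one-step-ahead prediction of a noisy sine (illustrative task)
u = np.sin(0.2 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
X, y = reservoir_states(u[:-1]), u[1:]
W = np.linalg.solve(X.T @ X + ridge * np.eye(N_nodes), X.T @ y)   # ridge-regression readout
print("train NMSE:", np.mean((X @ W - y) ** 2) / np.var(y))
```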
Abstract:We numerically demonstrate that joint optimization of an FIR-based pulse-shaper and receiver filter results in improved system performance and shorter filter lengths (lower complexity) for 4-PAM 100 GBd IM/DD systems.
Abstract:The use of directly modulated lasers (DMLs) is attractive in low-power, cost-constrained short-reach optical links. However, their limited modulation bandwidth can induce waveform distortion, undermining their data throughput. Traditional distortion mitigation techniques have relied mainly on the separate training of transmitter-side pre-distortion and receiver-side equalization. This approach overlooks the potential gains obtained by simultaneous optimization of the transmitter (constellation and pulse shaping) and the receiver (equalization and symbol demapping). Moreover, in the context of DML operation, the choice of laser-driving configuration parameters such as the bias current and peak-to-peak modulation current has a significant impact on system performance. We propose a novel end-to-end optimization approach for DML systems, incorporating the learning of bias and peak-to-peak modulation current into the optimization of constellation points, pulse shaping and equalization. The simulation of the DML dynamics is based on the laser rate equations at symbol rates between 15 and 25 Gbaud. The resulting output sequences from the rate equations are used to build a differentiable data-driven model, simplifying the calculation of gradients needed for end-to-end optimization. The proposed end-to-end approach is compared to three additional benchmark approaches: the uncompensated system without equalization, a receiver-side finite impulse response equalization approach, and an end-to-end approach with learnable pulse shape and nonlinear Volterra equalization but fixed bias and peak-to-peak modulation current. The numerical simulations of the four approaches show that the joint optimization of bias, peak-to-peak current, constellation points, pulse shaping and equalization outperforms all other approaches throughout the tested symbol rates.
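For readers unfamiliar with the laser rate equations mentioned above, the following NumPy sketch integrates a generic single-mode rate-equation model with forward Euler steps to turn a 4-PAM drive current (bias plus peak-to-peak modulation) into an optical output waveform at roughly 25 GBd. All device parameters are textbook-style placeholders, not the values used in the paper, and the differentiable data-driven surrogate model is not reproduced.

```python
# Minimal NumPy sketch of direct modulation via single-mode laser rate
# equations (forward Euler). All parameter values are generic placeholders.
import numpy as np

q = 1.602e-19
V, tau_n, tau_p = 3e-17, 1e-9, 3e-12      # active volume, carrier/photon lifetimes
g0, N_tr, eps = 1e-12, 1e24, 1e-23        # gain slope, transparency density, gain compression
Gamma, beta = 0.3, 1e-4                   # confinement factor, spontaneous-emission factor

def dml_output(i_drive, dt=1e-12):
    N, S = 1.1 * N_tr, 1e18               # rough initial carrier/photon densities
    out = np.empty_like(i_drive)
    for idx, I in enumerate(i_drive):
        G = g0 * (N - N_tr) / (1 + eps * S)
        dN = I / (q * V) - N / tau_n - G * S
        dS = Gamma * G * S - S / tau_p + Gamma * beta * N / tau_n
        N, S = N + dt * dN, S + dt * dS
        out[idx] = S                      # photon density ~ optical output power
    return out

# Drive current: bias plus peak-to-peak modulation around it (placeholder values);
# 40 samples/symbol at dt = 1 ps corresponds to a 25 GBd symbol rate.
bias, ipp = 30e-3, 20e-3
symbols = np.random.default_rng(1).choice([-1.5, -0.5, 0.5, 1.5], 500)   # 4-PAM
i_drive = bias + (ipp / 3) * np.repeat(symbols, 40)
power = dml_output(i_drive)
```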
Abstract:The rate and reach of directly modulated laser links are often limited by the interplay between chirp and fiber chromatic dispersion. We address this by jointly optimizing the transmitter, the receiver, and the laser bias and peak-to-peak current. Our approach outperforms Volterra post-equalization at various symbol rates.
Abstract:We numerically demonstrate a microring-based time-delay reservoir computing scheme that simultaneously solves three tasks involving time-series prediction, classification, and wireless channel equalization. Each task performed on a wavelength-multiplexed channel achieves state-of-the-art performance with optimized power and frequency detuning.
Abstract:Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in the MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of the thermo-optic and free-carrier effects, on the prediction error for the NARMA-10 time-series task. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity's transition from linear to nonlinear regimes. One of these regions offers very low time-series prediction error at relatively low input power and node count, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of the MRR and the optimization of its physical properties for improving the prediction performance of time-delay reservoir computing.
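For reference, the NARMA-10 benchmark used as the prediction task above can be generated as in the short sketch below; the reservoir itself (the MRR-based TDRC) is not reproduced, and the sequence length and seed are arbitrary.

```python
# Minimal NumPy sketch of the standard NARMA-10 benchmark series.
import numpy as np

def narma10(T, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 0.5, T)                     # standard NARMA-10 input range
    y = np.zeros(T)
    for t in range(9, T - 1):                      # recursion can rarely diverge for unlucky inputs
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(4000)
# A reservoir would be trained to map u -> y; performance is usually reported
# as NMSE = mean((y_hat - y)**2) / var(y).
```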