Abstract:We experimentally demonstrate the joint optimization of transmitter and receiver parameters in directly modulated laser systems, showing superior performance compared to nonlinear receiver-only equalization while using fewer memory taps, less bandwidth, and lower radiofrequency power.
Abstract:This paper investigates end-to-end (E2E) learning for the joint optimization of the pulse shaper and receiver filter to reduce intersymbol interference (ISI) in bandwidth-limited communication systems. We study two numerical simulation models: 1) an additive white Gaussian noise (AWGN) channel with a bandwidth limitation and 2) an intensity-modulated direct-detection (IM/DD) link employing an electro-absorption modulator. For both models, we implement a wavelength division multiplexing (WDM) scheme to ensure that the learned filters adhere to the bandwidth constraints of the WDM channels. Our findings reveal that E2E learning greatly surpasses traditional single-sided optimization of the transmitter pulse shaper or receiver filter, achieving significant symbol-error-rate gains with shorter filter lengths. These results suggest that E2E learning can reduce the complexity and enhance the performance of future high-speed optical communication systems.
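To illustrate why two-sided optimization helps, the sketch below jointly fits short transmitter and receiver FIR filters on a toy symbol-spaced ISI channel by alternating least squares. This is a simplified stand-in for the gradient-based E2E learning described in the abstract, not the actual method: the channel taps, filter lengths, noise level, and decision delay are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy symbol-spaced link: 2-PAM symbols through a low-pass ISI channel.
# Channel taps, filter lengths and decision delay are illustrative choices.
n = 4000
s = rng.choice([-1.0, 1.0], size=n)
chan = np.array([0.6, 0.8, 0.4])            # bandwidth-limited channel
w = 0.02 * rng.standard_normal(n + 32)      # frozen additive noise
Lt, Lr, d = 5, 7, 6                         # tx/rx filter lengths, delay

def rx_samples(h_tx, h_rx):
    y = np.convolve(np.convolve(s, h_tx), chan)
    y = y + w[: y.size]
    return np.convolve(y, h_rx)[d : d + n]  # align with s at delay d

def mse(h_tx, h_rx):
    return np.mean((rx_samples(h_tx, h_rx) - s) ** 2)

def delayed_cols(base, L):
    """Design matrix whose column k is `base` delayed by k samples."""
    A = np.zeros((n, L))
    pad = np.concatenate([np.zeros(L), base, np.zeros(L + d)])
    for k in range(L):
        A[:, k] = pad[L + d - k : L + d - k + n]
    return A

def solve_rx(h_tx):
    """Least-squares receiver filter for a fixed transmitter filter."""
    y = np.convolve(np.convolve(s, h_tx), chan)
    y = y + w[: y.size]
    return np.linalg.lstsq(delayed_cols(y, Lr), s, rcond=None)[0]

def solve_tx(h_rx):
    """Least-squares transmitter filter for a fixed receiver filter."""
    base = np.convolve(s, np.convolve(chan, h_rx))
    ylen = n + Lt - 1 + chan.size - 1
    b = np.convolve(w[:ylen], h_rx)[d : d + n]  # noise seen after rx filter
    return np.linalg.lstsq(delayed_cols(base, Lt), s - b, rcond=None)[0]

h_tx = np.zeros(Lt); h_tx[0] = 1.0          # start with "no shaping"
h_rx_only = solve_rx(h_tx)                  # receiver-only equalization
mse_rx_only = mse(h_tx, h_rx_only)

h_rx = h_rx_only
for _ in range(5):                          # alternate tx/rx least squares
    h_tx = solve_tx(h_rx)
    h_rx = solve_rx(h_tx)
mse_joint = mse(h_tx, h_rx)
```

Because each alternating step solves the exact least-squares problem for one filter with the other fixed, the mean squared error decreases monotonically, so the joint solution can only improve on the receiver-only equalizer of the same length.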
Abstract:We present a comprehensive phase noise characterization of a mid-IR Cr:ZnS frequency comb. Despite their emergence as a platform for high-resolution dual-comb spectroscopy, detailed investigations into the phase noise of Cr:ZnS combs have been lacking. To address this, we use a recently proposed phase noise measurement technique that employs multi-heterodyne detection and subspace tracking. This allows the common-mode, repetition-rate, and higher-order phase noise terms, and their scaling as a function of comb-line number, to be measured with a single measurement setup. We demonstrate that the comb under test is dominated by common-mode phase noise, while all other phase noise terms are below the measurement noise floor (~ -120 dB rad^2/Hz) and are therefore not identifiable.
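The decomposition underlying this characterization can be sketched numerically. The snippet below is not the paper's subspace-tracking estimator; it only illustrates the two-term "elastic-tape" phase model, in which each comb line carries a common-mode term plus a repetition-rate term scaling with line number, and shows that a per-sample least-squares fit over the lines separates the two. All magnitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Elastic-tape comb phase model: the phase of line n decomposes as
#   phi_n(t) = phi_cm(t) + n * phi_rep(t) + noise,
# i.e. a common-mode term plus a repetition-rate term scaling with line number.
lines = np.arange(-20, 21, dtype=float)            # relative comb-line indices
T = 5000
phi_cm = np.cumsum(0.05 * rng.standard_normal(T))  # dominant common mode
phi_rep = np.cumsum(5e-4 * rng.standard_normal(T)) # weak repetition-rate term
phi = phi_cm[:, None] + np.outer(phi_rep, lines)
phi += 0.01 * rng.standard_normal(phi.shape)       # per-line measurement noise

# Per-sample least squares on the basis [1, n] separates the two terms.
B = np.stack([np.ones_like(lines), lines], axis=1) # shape (41, 2)
coef, *_ = np.linalg.lstsq(B, phi.T, rcond=None)   # coef: shape (2, T)
est_cm, est_rep = coef
```

Averaging over many lines suppresses the per-line measurement noise, which is why both terms are recovered accurately even though the repetition-rate term is orders of magnitude weaker than the common mode.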
Abstract:Nowadays, as demand for ever more powerful computing resources continues to grow, alternative advanced computing paradigms are under extensive investigation. Significant effort has been made to deviate from conventional von Neumann architectures. In-memory computing has emerged in the field of electronics as a possible solution to the infamous bottleneck between memory and computing processors, which reduces the effective throughput of data. In photonics, novel schemes attempt to collocate the computing processor and memory in a single device. Photonics offers the flexibility of multiplexing streams of data not only spatially and in time, but also in frequency or, equivalently, in wavelength, which makes it highly suitable for parallel computing. Here, we numerically show the use of time and wavelength division multiplexing (WDM) to solve four independent tasks at the same time in a single photonic chip, serving as a proof of concept for our proposal. The system is a time-delay reservoir computer (TDRC) based on a microring resonator (MRR). The addressed tasks cover different applications: time-series prediction, waveform signal classification, wireless channel equalization, and radar signal prediction. The system is also tested for simultaneous computing of up to 10 instances of the same task, exhibiting excellent performance. The footprint of the system is reduced by using time-division multiplexing of the nodes that act as the neurons of the studied neural network scheme. WDM is used to parallelize the wavelength channels, each addressing a single task. By adjusting the input power and frequency of each optical channel, we achieve, for each task, levels of performance comparable to those quoted in state-of-the-art reports focusing on single-task operation...
Abstract:Silicon microring resonators (MRRs) have shown strong potential as the nonlinear nodes of photonic reservoir computing (RC) schemes. By exploiting nonlinearities within a silicon MRR, such as those caused by free-carrier dispersion (FCD) and thermo-optic (TO) effects, it is possible to map the input data of the RC to a higher-dimensional space. Furthermore, by adding an external waveguide between the through and add ports of the MRR, it is possible to implement a time-delay RC (TDRC) with enhanced memory: the input from the through port is fed back into the add port of the ring, and the delay applied by the external waveguide effectively adds memory. In a TDRC, the nodes are multiplexed in time, and their respective time evolutions are detected at the drop port. The performance of an MRR-based TDRC is highly dependent on the amount of nonlinearity in the MRR. The nonlinear effects, in turn, depend on the physical properties of the MRR, as these determine the lifetimes of the effects. Another factor to take into account is the stability of the MRR response, as strong time-domain discontinuities at the drop port are known to emerge from FCD nonlinearities due to self-pulsing (highly nonlinear behaviour). However, quantifying the amount of nonlinearity that RC needs to achieve optimum performance on a given task is challenging. Therefore, further analysis is required to fully understand the nonlinear dynamics of this TDRC setup. Here, we quantify the nonlinear and linear memory capacity of the described microring-based TDRC scheme as a function of the time constants of the generated free carriers and of the TO effects. We analyze the TDRC dynamics to determine the parameter space, in terms of input signal power and frequency detuning, over which conventional RC tasks can be performed satisfactorily by the TDRC scheme.
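The TDRC principle recurring in these abstracts can be reduced to a short software analogue: virtual nodes multiplexed in time, a random input mask, a nonlinear node response, and a linear readout trained by ridge regression. In the sketch below a tanh nonlinearity and a ring coupling stand in for the MRR's FCD/TO dynamics and delay line; node count, gains, and the toy prediction task are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal time-delay reservoir: N virtual nodes multiplexed in time, a random
# input mask, and a tanh nonlinearity standing in for the MRR's FCD/TO response.
N, alpha, eta = 50, 0.8, 0.5                # nodes, feedback gain, input gain
mask = rng.uniform(-1.0, 1.0, N)

def reservoir(u):
    x = np.zeros(N)
    states = np.empty((u.size, N))
    for t, ut in enumerate(u):
        # Ring coupling (np.roll) is a common simplification of the delay line.
        x = np.tanh(eta * mask * ut + alpha * np.roll(x, 1))
        states[t] = x
    return states

# Memory-demanding toy task: one-step-ahead prediction of a noisy sinusoid.
T, train = 3000, 2000
u = np.sin(0.2 * np.arange(T)) + 0.1 * rng.standard_normal(T)
S = reservoir(u[:-1])
y = u[1:]

# Linear readout trained by ridge regression on the first `train` samples.
A = np.hstack([S, np.ones((S.shape[0], 1))])
lam = 1e-4
W = np.linalg.solve(A[:train].T @ A[:train] + lam * np.eye(N + 1),
                    A[:train].T @ y[:train])
pred = A[train:] @ W
nmse = np.mean((pred - y[train:]) ** 2) / np.var(y[train:])
```

Only the readout weights W are trained; the reservoir itself stays fixed, which is what makes hardware implementations such as the MRR schemes above attractive.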
Abstract:We numerically demonstrate that joint optimization of an FIR-based pulse shaper and receiver filter results in improved system performance and shorter filter lengths (lower complexity) for 4-PAM 100 GBd IM/DD systems.
Abstract:The use of directly modulated lasers (DMLs) is attractive in low-power, cost-constrained short-reach optical links. However, their limited modulation bandwidth can induce waveform distortion, undermining their data throughput. Traditional distortion mitigation techniques have relied mainly on separate training of transmitter-side pre-distortion and receiver-side equalization, overlooking the potential gains of simultaneously optimizing the transmitter (constellation and pulse shaping) and receiver (equalization and symbol demapping). Moreover, in the context of DML operation, laser-driving configuration parameters such as the bias current and peak-to-peak modulation current have a significant impact on system performance. We propose a novel end-to-end optimization approach for DML systems that incorporates the learning of the bias and peak-to-peak modulation current into the optimization of the constellation points, pulse shaping, and equalization. The DML dynamics are simulated using the laser rate equations at symbol rates between 15 and 25 Gbaud. The resulting output sequences from the rate equations are used to build a differentiable data-driven model, simplifying the calculation of the gradients needed for end-to-end optimization. The proposed end-to-end approach is compared to three benchmark approaches: the uncompensated system without equalization, a receiver-side finite impulse response equalization approach, and an end-to-end approach with a learnable pulse shape and nonlinear Volterra equalization but fixed bias and peak-to-peak modulation current. Numerical simulations of the four approaches show that the joint optimization of bias, peak-to-peak current, constellation points, pulse shaping, and equalization outperforms all other approaches across the tested symbol rates.
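The rate-equation simulation at the heart of this approach can be sketched with a forward-Euler integration of the textbook single-mode laser rate equations. The snippet below is not the paper's model (which wraps the rate-equation output in a differentiable data-driven surrogate); all parameter values, the 2 Gbaud drive, and the current levels are illustrative assumptions, chosen only to show how bias and peak-to-peak modulation current shape the emitted photon-number waveform.

```python
import numpy as np

# Textbook single-mode laser rate equations (carrier number N, photon number S),
# integrated with forward Euler. All parameter values are illustrative only.
q    = 1.602e-19    # electron charge [C]
tn   = 1e-9         # carrier lifetime [s]
tp   = 2e-12        # photon lifetime [s]
g0   = 1e4          # gain coefficient [1/s per carrier]
N0   = 1e8          # transparency carrier number
Gam  = 0.3          # confinement factor
beta = 1e-4         # spontaneous-emission factor

def simulate(current, dt):
    """Return the photon-number trace S(t) for a drive-current waveform [A]."""
    N, S = N0, 1.0
    out = np.empty(current.size)
    for i, I in enumerate(current):
        G = g0 * (N - N0)                       # stimulated emission rate
        dN = I / q - N / tn - G * S
        dS = Gam * G * S - S / tp + beta * Gam * N / tn
        N += dt * dN
        S += dt * dS
        out[i] = S
    return out

# OOK-like drive: 60 mA bias, +/-15 mA swing, 500 ps symbols (2 Gbaud).
dt, Tsym = 1e-13, 500e-12
sps = int(round(Tsym / dt))                     # samples per symbol
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
drive = 0.060 + 0.015 * (2.0 * np.repeat(bits, sps) - 1.0)
settle = np.full(int(round(3e-9 / dt)), 0.060)  # 3 ns at bias to settle
trace = simulate(np.concatenate([settle, drive]), dt)[settle.size:]

per_sym = trace.reshape(bits.size, sps).mean(axis=1)
```

Because both current levels sit above threshold, the output follows the bit pattern with finite rise times and relaxation-oscillation ringing; pushing the low level toward or below threshold, or raising the symbol rate, degrades the eye, which is exactly the trade-off the learned bias and peak-to-peak current address.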
Abstract:The rate and reach of directly modulated laser links are often limited by the interplay between chirp and fiber chromatic dispersion. We address this by jointly optimizing the transmitter, the receiver, and the laser bias and peak-to-peak modulation current. Our approach outperforms Volterra post-equalization at various symbol rates.
Abstract:We numerically demonstrate a silicon add-drop microring-based reservoir computing scheme that combines parallel delayed inputs and wavelength division multiplexing. The scheme solves memory-demanding tasks like time-series prediction with good performance without requiring external optical feedback.
Abstract:We numerically demonstrate a microring-based time-delay reservoir computing scheme that simultaneously solves three tasks involving time-series prediction, classification, and wireless channel equalization. Each task performed on a wavelength-multiplexed channel achieves state-of-the-art performance with optimized power and frequency detuning.