Abstract:We experimentally demonstrate the joint optimization of transmitter and receiver parameters in directly modulated laser systems, showing superior performance compared to nonlinear receiver-only equalization while using fewer memory taps, less bandwidth, and lower radiofrequency power.
Abstract:As the demand for more powerful computing resources continues to grow, alternative advanced computing paradigms are under extensive investigation. Significant effort has been made to move away from conventional von Neumann architectures. In-memory computing has emerged in the field of electronics as a possible solution to the well-known bottleneck between memory and processor, which reduces the effective throughput of data. In photonics, novel schemes attempt to collocate the computing processor and memory in a single device. Photonics offers the flexibility of multiplexing streams of data not only spatially and in time, but also in frequency or, equivalently, in wavelength, which makes it highly suitable for parallel computing. Here, we numerically show the use of time and wavelength division multiplexing (WDM) to solve four independent tasks at the same time in a single photonic chip, serving as a proof of concept for our proposal. The system is a time-delay reservoir computer (TDRC) based on a microring resonator (MRR). The addressed tasks cover different applications: time-series prediction, waveform signal classification, wireless channel equalization, and radar signal prediction. The system is also tested for simultaneous computing of up to 10 instances of the same task, exhibiting excellent performance. The footprint of the system is reduced by time-division multiplexing of the nodes that act as the neurons of the studied neural network scheme, while WDM is used to parallelize the wavelength channels, each addressing a single task. By adjusting the input power and frequency of each optical channel, we achieve, for each task, levels of performance comparable to those quoted in state-of-the-art reports focusing on single-task operation.
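To make the TDRC workflow above concrete, the following minimal sketch shows how a time-multiplexed reservoir is typically read out: the input is stretched over virtual nodes by a random mask, passed through a nonlinearity (here a tanh placeholder standing in for the MRR cavity dynamics), and only a linear readout is trained by ridge regression. All sizes and names (n_nodes, mask, lam) are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_nodes = 2000, 50         # illustrative sizes
u = rng.uniform(0.0, 0.5, n_samples)  # scalar input stream

# Input masking: each sample is stretched over the virtual nodes by a
# fixed random mask before driving the physical nonlinearity.
mask = rng.uniform(-1.0, 1.0, n_nodes)
states = np.tanh(u[:, None] * mask[None, :])  # tanh = placeholder nonlinearity

# Linear readout trained by ridge regression: W = (X^T X + lam*I)^-1 X^T y.
y_target = np.roll(u, 1)                          # toy one-step-memory target
X = np.hstack([states, np.ones((n_samples, 1))])  # append a bias column
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y_target)
nmse = np.mean((X @ W - y_target) ** 2) / np.var(y_target)
print(f"toy NMSE: {nmse:.3f}")
```

In the WDM scheme described above, one such readout would be trained per wavelength channel, so each task keeps its own output weights while sharing the same physical cavity.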
Abstract:Silicon microring resonators (MRRs) have shown strong potential to act as the nonlinear nodes of photonic reservoir computing (RC) schemes. By exploiting nonlinearities within a silicon MRR, such as those caused by free-carrier dispersion (FCD) and thermo-optic (TO) effects, it is possible to map the input data of the RC to a higher-dimensional space. Furthermore, by adding an external waveguide between the through and add ports of the MRR, it is possible to implement a time-delay RC (TDRC) with enhanced memory. The signal at the through port is fed back into the add port of the ring with a delay set by the external waveguide, effectively adding memory. In a TDRC, the nodes are multiplexed in time, and their respective time evolutions are detected at the drop port. The performance of MRR-based TDRC is highly dependent on the amount of nonlinearity in the MRR. The nonlinear effects, in turn, depend on the physical properties of the MRR, which determine the lifetimes of these effects. Another factor to take into account is the stability of the MRR response, as strong time-domain discontinuities at the drop port are known to emerge from FCD nonlinearities due to self-pulsing (highly nonlinear behaviour). However, quantifying the amount of nonlinearity that the RC needs to achieve optimum performance on a given task is challenging. Therefore, further analysis is required to fully understand the nonlinear dynamics of this TDRC setup. Here, we quantify the linear and nonlinear memory capacity of the previously described microring-based TDRC scheme as a function of the time constants of the generated free carriers and of the TO effects. We analyze the properties of the TDRC dynamics that shape the parameter space, in terms of input signal power and frequency detuning range, over which conventional RC tasks can be satisfactorily performed by the TDRC scheme.
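Memory capacity is a standard figure of merit here; as a hedged sketch of the usual procedure (following the common definition from the RC literature, not code from this work), a separate linear readout is trained to reconstruct each delayed input u(t-k), and the squared correlation between target and reconstruction is summed over delays:

```python
import numpy as np

def memory_capacity(states, u, max_delay=30, lam=1e-6):
    """Linear memory capacity: states is (T, N), u is (T,).

    For each delay k, a ridge-regression readout reconstructs u(t-k);
    MC_k is the squared correlation of target and reconstruction.
    """
    T, N = states.shape
    X = np.hstack([states, np.ones((T, 1))])  # append a bias column
    mc = 0.0
    for k in range(1, max_delay + 1):
        y = np.roll(u, k)  # target u(t-k); wrap-around edge ignored below
        W = np.linalg.solve(X.T @ X + lam * np.eye(N + 1), X.T @ y)
        r = np.corrcoef(X[k:] @ W, y[k:])[0, 1]
        mc += r ** 2
    return mc
```

The nonlinear memory capacity reported in the abstract is obtained analogously, with the delayed inputs replaced by nonlinear (e.g. polynomial) functions of them.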
Abstract:The rate and reach of directly modulated laser links are often limited by the interplay between chirp and fiber chromatic dispersion. We address this by jointly optimizing the transmitter, the receiver, and the bias and peak-to-peak currents driving the laser. Our approach outperforms Volterra post-equalization at various symbol rates.
Abstract:The use of directly modulated lasers (DMLs) is attractive in low-power, cost-constrained short-reach optical links. However, their limited modulation bandwidth can induce waveform distortion, undermining their data throughput. Traditional distortion mitigation techniques have relied mainly on the separate training of transmitter-side pre-distortion and receiver-side equalization. This approach overlooks the potential gains of simultaneously optimizing the transmitter (constellation and pulse shaping) and the receiver (equalization and symbol demapping). Moreover, in the context of DML operation, laser-driving configuration parameters such as the bias current and the peak-to-peak modulation current have a significant impact on system performance. We propose a novel end-to-end optimization approach for DML systems, incorporating the learning of the bias and peak-to-peak modulation currents into the optimization of constellation points, pulse shaping and equalization. The simulation of the DML dynamics is based on the laser rate equations at symbol rates between 15 and 25 Gbaud. The resulting output sequences from the rate equations are used to build a differentiable data-driven model, simplifying the calculation of the gradients needed for end-to-end optimization. The proposed end-to-end approach is compared to three benchmark approaches: the uncompensated system without equalization, a receiver-side finite impulse response equalization approach, and an end-to-end approach with learnable pulse shape and nonlinear Volterra equalization but fixed bias and peak-to-peak modulation currents. The numerical simulations of the four approaches show that the joint optimization of bias, peak-to-peak current, constellation points, pulse shaping and equalization outperforms all other approaches across all tested symbol rates.
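As a rough illustration of how the laser-driving parameters can enter the optimization graph, the hypothetical PyTorch sketch below wraps the bias and peak-to-peak currents as trainable parameters alongside pulse-shaping and equalizer taps; `surrogate` stands for any differentiable stand-in for the rate equations, and all names and initial values are assumptions, not the paper's implementation:

```python
import torch

class E2ELink(torch.nn.Module):
    def __init__(self, sps=4, n_taps=17):
        super().__init__()
        self.i_bias = torch.nn.Parameter(torch.tensor(30.0))  # mA, guessed init
        self.i_pp = torch.nn.Parameter(torch.tensor(20.0))    # mA, guessed init
        self.pulse = torch.nn.Conv1d(1, 1, n_taps, padding=n_taps // 2)
        self.eq = torch.nn.Conv1d(1, 1, n_taps, padding=n_taps // 2)
        self.sps = sps  # samples per symbol

    def forward(self, symbols, surrogate):
        # Upsample the symbols and apply the learnable pulse shape.
        x = torch.zeros(1, 1, symbols.numel() * self.sps)
        x[0, 0, :: self.sps] = symbols
        # Learnable bias and swing map the shaped waveform to a drive current.
        drive = self.i_bias + 0.5 * self.i_pp * self.pulse(x)
        # Gradients flow through the differentiable DML surrogate model.
        return self.eq(surrogate(drive))
```

Training then reduces to minimizing a symbol-wise loss with a standard optimizer (e.g. torch.optim.Adam over model.parameters()), which is what makes the joint learning of bias, swing, pulse shape and equalizer tractable.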
Abstract:We numerically demonstrate a silicon add-drop microring-based reservoir computing scheme that combines parallel delayed inputs and wavelength division multiplexing. The scheme solves memory-demanding tasks such as time-series prediction with good performance and without requiring external optical feedback.
Abstract:We numerically demonstrate a microring-based time-delay reservoir computing scheme that simultaneously solves three tasks involving time-series prediction, classification, and wireless channel equalization. Each task, performed on its own wavelength-multiplexed channel, achieves state-of-the-art performance with optimized input power and frequency detuning.
Abstract:Microring resonators (MRRs) are promising devices for time-delay photonic reservoir computing, but the impact of the different physical effects taking place in MRRs on reservoir computing performance is yet to be fully understood. We numerically analyze the impact of linear losses, as well as of the relaxation times of the thermo-optic and free-carrier effects, on the prediction error for the time-series task NARMA-10. We demonstrate the existence of three regions, defined by the input power and the frequency detuning between the optical source and the microring resonance, that reveal the cavity transition from linear to nonlinear regimes. One of these regions offers very low time-series prediction error at relatively low input power and with a small number of nodes, while the other regions either lack nonlinearity or become unstable. This study provides insight into the design of the MRR and the optimization of its physical properties for improving the prediction performance of time-delay reservoir computing.
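For reference, the NARMA-10 target used in this analysis follows the standard recursion from the RC literature; a self-contained generator (inputs drawn uniformly from [0, 0.5]) looks like:

```python
import numpy as np

def narma10(T, seed=0):
    """Standard NARMA-10 benchmark series; the recursion can diverge for
    unlucky input draws, in which case a different seed is used."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, T)
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y
```

The task is to predict y(t+1) from the reservoir's response to u(t), which demands both nonlinearity and roughly ten steps of memory, hence its use as a probe of the linear and nonlinear regions identified above.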
Abstract:End-to-end learning has become a popular method for joint transmitter and receiver optimization in optical communication systems. Such an approach may require a differentiable channel model, thus hindering the optimization of links based on directly modulated lasers (DMLs). This is due to the DML behavior in the large-signal regime, for which no analytical solution is available. In this paper, this problem is addressed by developing and comparing differentiable machine learning-based surrogate models. The models are quantitatively assessed in terms of root mean square error and training/testing time. Once the models are trained, the surrogates are tested in a numerical equalization setup, resembling a practical end-to-end scenario. Based on the numerical investigation conducted, the convolutional attention transformer is shown to outperform the other models considered.
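For context, the large-signal DML behavior referred to above is governed by the single-mode laser rate equations; one common textbook formulation (symbol conventions vary between papers) is

```latex
\begin{align}
\frac{dN}{dt} &= \frac{I(t)}{qV} - \frac{N}{\tau_n}
                 - \frac{g_0 (N - N_{tr})}{1 + \epsilon S}\, S, \\
\frac{dS}{dt} &= \Gamma\, \frac{g_0 (N - N_{tr})}{1 + \epsilon S}\, S
                 - \frac{S}{\tau_p} + \frac{\Gamma \beta N}{\tau_n},
\end{align}
```

where N and S are the carrier and photon densities, I(t) the drive current, \tau_n and \tau_p the carrier and photon lifetimes, g_0 the gain coefficient, N_{tr} the transparency density, \epsilon the gain-compression factor, \Gamma the confinement factor and \beta the spontaneous-emission factor. The coupled nonlinearity in both equations is what rules out a closed-form large-signal solution and motivates the differentiable surrogates compared in the paper.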
Abstract:We quantify the impact of thermo-optic and free-carrier effects on time-delay reservoir computing using a silicon microring resonator. We identify pump power and frequency detuning ranges yielding an NMSE below 0.05 on the NARMA-10 task, depending on the time constants of the two considered effects.
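Throughout, NMSE is understood in its usual normalized form (a standard definition, stated here for completeness):

```latex
\mathrm{NMSE} = \frac{\sum_{t=1}^{T} \left(\hat{y}_t - y_t\right)^2}
                     {\sum_{t=1}^{T} \left(y_t - \bar{y}\right)^2},
```

so NMSE < 0.05 means the residual error is below 5% of the target variance.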