Abstract: Many instruments performing optical and non-optical imaging and sensing, such as Optical Coherence Tomography (OCT), Magnetic Resonance Imaging and Fourier-transform spectrometry, produce digital signals containing modulations, i.e. sine-like components, which reveal information about the structure or characteristics of the investigated object only after Fourier transformation. Due to the fundamental physics-related limitations of such methods, the distribution of these signal components is often nonlinear and, when not properly compensated, leads to a drop in resolution, precision or quality of the final image. Here, we propose an innovative approach that not only has the potential to remove nonlinearities from the signal, but, most of all, allows a given nonlinearity order to be switched off while leaving all the others intact. The latter provides a tool for a more in-depth analysis of the nonlinearity-inducing properties of the investigated object, which could lead to applications in early disease detection or more sensitive sensing of chemical compounds. We consider OCT signals and nonlinearities up to the third order. In our approach, we propose two neural networks: one that removes solely the second-order nonlinearity and one that removes solely the third-order nonlinearity. The networks' input is a novel two-dimensional data structure containing all the information the network needs to infer a nonlinearity-free signal. We describe the developed networks and present the results for second- and third-order nonlinearity removal in OCT data representing images of various objects: a mirror, glass, and fruits.
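The effect the abstract describes can be illustrated with a toy sketch (our illustration, not the paper's pipeline): a sine-like spectral fringe whose phase carries second- and third-order terms on top of the linear one. After Fourier transformation, the nonlinear phase broadens and lowers the depth-profile peak; the coefficients `a2` and `a3` below are arbitrary choices for demonstration.

```python
import numpy as np

def fringe(k, z, a2=0.0, a3=0.0):
    """Sine-like signal component: linear phase 2*z*k plus nonlinear terms."""
    return np.cos(2 * z * k + a2 * k**2 + a3 * k**3)

# Wavenumber axis (arbitrary units), sampled so the linear fringe is periodic.
k = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

clean = np.abs(np.fft.fft(fringe(k, z=40.0)))
dispersed = np.abs(np.fft.fft(fringe(k, z=40.0, a2=8.0, a3=2.0)))

# Uncompensated nonlinearity: the tallest peak drops because its energy leaks
# into neighbouring depth bins, i.e. resolution and image quality degrade.
print(clean.max() > dispersed.max())  # True
```

A nonlinearity-removal step, such as the networks proposed in the paper, would aim to restore the sharp, tall peak of the `clean` case from the `dispersed` signal.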
Abstract: Artefacts in quantum-mimic Optical Coherence Tomography are considered detrimental because they scramble the images even for the simplest objects. They are a side effect of the autocorrelation used in the quantum entanglement mimicking algorithm behind this method. Interestingly, the autocorrelation imprints certain characteristics onto an artefact: its shape and features depend on the amount of dispersion exhibited by the layer that the artefact corresponds to. This unique relationship between an artefact and the corresponding layer's dispersion can be used to determine the Group Velocity Dispersion (GVD) values of object layers and, based on them, to build a dispersion-contrasted depth profile. The retrieval of GVD profiles is achieved via Machine Learning. During training, a neural network learns the relationship between GVD and the artefacts' shape and features, and consequently, it is able to provide a good qualitative representation of the object's dispersion profile for never-before-seen data: computer-generated single dispersive layers and experimental pieces of glass.
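Where autocorrelation-type artefacts come from can be shown with a toy sketch (our illustration, not the quantum-mimic algorithm itself, and without the dispersion dependence central to the paper): the spectrum of a two-interface object contains fringes at each interface depth, plus a cross-term between the two interfaces. In the depth profile, this cross-term appears as an extra peak that does not correspond to any real structure.

```python
import numpy as np

# Wavenumber axis (arbitrary units); interface depths z1, z2 are hypothetical.
k = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
z1, z2 = 30.0, 50.0

# Two-interface interference spectrum: fringes at 2*z1, 2*z2 and, from the
# cross-term, at 2*(z2 - z1).
spectrum = np.abs(1 + 0.5 * np.exp(1j * 2 * z1 * k)
                    + 0.5 * np.exp(1j * 2 * z2 * k))**2

# Depth profile: genuine peaks at bins 60 and 100, and an artefact at bin 40.
ascan = np.abs(np.fft.fft(spectrum - spectrum.mean()))
print(ascan[40] > 10 * ascan[45])  # True: artefact peak stands out
```

In quantum-mimic OCT, such artefacts additionally carry the dispersion signature of the layer they correspond to, which is the property the neural network is trained to exploit.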