Abstract: We present a deep learning approach for laser speckle reduction ('DeepLSR') on images illuminated with a multi-wavelength, red-green-blue laser. We acquired a set of images from a variety of objects illuminated with laser light, both with and without optical speckle reduction, and with an incoherent light-emitting diode. An adversarial network was then trained for paired image-to-image translation to transform images from a source domain of coherent illumination to a target domain of incoherent illumination. When applied to a new image set of coherently illuminated test objects, this network reconstructs incoherently illuminated images with an average peak signal-to-noise ratio and structural similarity index of 36 dB and 0.91, respectively, compared to 30 dB and 0.88 using optical speckle reduction, and 30 dB and 0.88 using non-local means processing. We demonstrate proof-of-concept for speckle-reduced laser endoscopy by applying DeepLSR to images of ex-vivo gastrointestinal tissue illuminated with a fiber-coupled laser source. For applications that require speckle-reduced imaging, DeepLSR holds promise to enable the use of coherent sources that are more stable, efficient, compact, and brighter than conventional alternatives.
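As a minimal sketch of how the reported image-quality metrics could be computed, the snippet below evaluates a speckle-reduced output against an incoherently illuminated reference image using peak signal-to-noise ratio and structural similarity from scikit-image. This is only an illustration of the evaluation metrics named in the abstract, not the authors' evaluation code; the function name, the 8-bit RGB image assumption, and the file paths are hypothetical.

import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_speckle_reduction(reconstructed: np.ndarray, reference: np.ndarray):
    """Compare a speckle-reduced image against an incoherently illuminated reference.

    Both inputs are assumed to be 8-bit RGB arrays of identical shape.
    """
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    ssim = structural_similarity(reference, reconstructed, channel_axis=-1, data_range=255)
    return psnr, ssim

# Hypothetical usage: compare a network output to its LED-illuminated reference.
reconstructed = imread("deeplsr_output.png")
reference = imread("led_reference.png")
psnr, ssim = evaluate_speckle_reduction(reconstructed, reference)
print(f"PSNR: {psnr:.1f} dB, SSIM: {ssim:.2f}")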