Abstract: In histological pathology, frozen sections are often used for rapid diagnosis during surgeries, as they can be produced within minutes. However, they suffer from artifacts and often lack crucial diagnostic details, particularly within the cell nuclei region. Permanent sections, on the other hand, contain more diagnostic detail but require a time-intensive preparation process. Here, we present a generative deep learning approach to enhance frozen section images by leveraging guidance from permanent sections. Our method places a strong emphasis on the nuclei region, which contains critical information in both frozen and permanent sections. Importantly, our approach avoids generating artificial data in blank regions, ensuring that the network only enhances existing features without introducing potentially unreliable information. We achieve this through a segmented attention network, incorporating nuclei-segmented images during training and adding an additional loss function to refine the nuclei details in the generated permanent images. We validated our method across various tissues, including kidney, breast, and colon. This approach significantly improves histological efficiency and diagnostic accuracy, enhancing frozen section images within seconds and integrating seamlessly into existing laboratory workflows.
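A minimal PyTorch-style sketch of how such a nuclei-focused term could be combined with a standard reconstruction loss is given below; the L1 criterion, the binary nuclei mask input, and the weight `lambda_nuclei` are illustrative assumptions, not the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def nuclei_weighted_loss(generated, target, nuclei_mask, lambda_nuclei=5.0):
    """Reconstruction loss with an extra term restricted to segmented nuclei.

    generated, target: (B, C, H, W) generated and real permanent-section images.
    nuclei_mask:       (B, 1, H, W) binary mask from the nuclei segmentation.
    lambda_nuclei:     weight of the nuclei-only term (assumed value).
    """
    # Global reconstruction term over the whole image.
    global_l1 = F.l1_loss(generated, target)

    # Additional term computed only inside the segmented nuclei, pushing the
    # network to refine nuclear detail; normalized by the masked area.
    diff = (generated - target).abs() * nuclei_mask  # mask broadcasts over channels
    nuclei_l1 = diff.sum() / (nuclei_mask.sum() * generated.shape[1]).clamp(min=1)

    return global_l1 + lambda_nuclei * nuclei_l1
```

In this kind of formulation, errors inside the nuclei contribute to the loss twice (globally and through the masked term), so the generator is penalized more heavily for blurring or distorting nuclear structure than for errors elsewhere in the tissue.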
Abstract: Histopathology plays a pivotal role in medical diagnostics. In contrast to preparing permanent sections for histopathology, a time-consuming process, preparing frozen sections is significantly faster and can be performed during surgery, where the sample scanning time should be minimized. Super-resolution techniques allow imaging the sample at lower magnification, sparing scanning time. In this paper, we present a new approach to super resolution for histopathological frozen sections, with a focus on achieving better distortion measures rather than pursuing photorealistic images that may compromise critical diagnostic information. Our deep-learning architecture learns the error between interpolated images and real images, thereby generating high-resolution images that preserve critical image details and reduce the risk of diagnostic misinterpretation. This is done by defining the loss functions in the frequency domain and assigning higher weights to the reconstruction of complex, high-frequency components. Compared to existing methods, we obtained significant improvements in the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR), and recovered details that were lost in the low-resolution frozen-section images and could affect the pathologist's clinical decisions. Our approach has great potential to provide more rapid frozen-section imaging with less scanning, while preserving high resolution in the imaged sample.
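The following is a hedged PyTorch sketch of a frequency-domain loss that up-weights high-frequency components; the radial weighting scheme, the `alpha` parameter, and the L1 distance between spectra are assumptions chosen for illustration, not the paper's exact loss.

```python
import torch

def frequency_weighted_loss(pred, target, alpha=1.0):
    """L1 loss in the 2-D Fourier domain, with weights that grow with spatial
    frequency so that fine, high-frequency details dominate the penalty.

    pred, target: (B, C, H, W) predicted and ground-truth high-resolution images.
    alpha:        controls how strongly high frequencies are up-weighted (assumed).
    """
    # Move both images to the frequency domain.
    pred_f = torch.fft.fft2(pred)
    target_f = torch.fft.fft2(target)

    # Radial frequency map: 0 at the DC component, 1 at the highest frequency.
    _, _, H, W = pred.shape
    fy = torch.fft.fftfreq(H, device=pred.device).abs().view(H, 1)
    fx = torch.fft.fftfreq(W, device=pred.device).abs().view(1, W)
    radius = torch.sqrt(fy ** 2 + fx ** 2)
    weight = 1.0 + alpha * radius / radius.max()

    # Weighted L1 distance between the complex spectra.
    return (weight * (pred_f - target_f).abs()).mean()
```

Under this assumed weighting, reconstruction errors at the finest spatial frequencies cost up to (1 + alpha) times more than errors near the DC component, which matches the stated goal of favoring distortion measures over photorealism. The residual-learning aspect can be sketched by having the network predict a correction added on top of a bicubically interpolated low-resolution input.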
Abstract: We propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it to the classification of healthy and cancer cells acquired by quantitative phase imaging. The proposed method, called transferring of pre-trained generative adversarial network (TOP-GAN), is a hybridization between transfer learning and generative adversarial networks (GANs). Healthy cells and cancer cells of different metastatic potential were imaged by low-coherence off-axis holography. After the acquisition, the optical path delay maps of the cells were extracted and used directly as input to the deep networks. To cope with the small number of classified images, we used GANs to train on a large number of unclassified images from another cell type (sperm cells). After this preliminary training, and after replacing the last layers of the network with new ones, we designed an automatic classifier for the correct cell type (healthy/primary cancer/metastatic cancer) with 90-99% accuracy, although small training sets of down to several images were used. These results compare favorably with other classical methods that aim to cope with the same problem of a small training set. We believe that our approach makes the combination of holographic microscopy and deep learning networks more accessible to the medical field by enabling rapid, automatic, and accurate classification in stain-free imaging flow cytometry. Furthermore, our approach is expected to be applicable to many other medical image classification tasks that suffer from a small training set.
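A hypothetical PyTorch sketch of the transfer step: a network pre-trained adversarially on the unclassified sperm-cell images is reused as a feature extractor, its final layer is replaced with a small multi-class head, and only the new head is trained on the small labeled set. The attribute name `head`, the feature dimension, and the frozen-backbone choice are assumptions for illustration, not the exact TOP-GAN architecture.

```python
import torch.nn as nn

def build_classifier_from_pretrained(backbone, num_classes=3, feature_dim=512):
    """Reuse a GAN network pre-trained on unlabeled phase images as a feature
    extractor and attach a new head for the three cell classes
    (healthy / primary cancer / metastatic cancer).

    backbone:    pre-trained nn.Module whose final layer is assumed to be
                 `backbone.head` (an assumption about its structure).
    feature_dim: assumed size of the features feeding the original last layer.
    """
    # Freeze the pre-trained features so the small labeled set is used only
    # to train the new classification head.
    for param in backbone.parameters():
        param.requires_grad = False

    # Replace the original last layer with a new multi-class head.
    backbone.head = nn.Sequential(
        nn.Linear(feature_dim, 128),
        nn.ReLU(inplace=True),
        nn.Linear(128, num_classes),
    )
    return backbone
```

In such a setup, only the parameters of the new head (and optionally the last pre-trained layers) would be passed to the optimizer and trained with a cross-entropy loss on the small labeled set of optical path delay maps.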