Abstract: The quality of quantitative differential phase contrast (qDPC) reconstruction can be severely degraded by a mismatch between the backgrounds of the two obliquely illuminated images, yielding problematic phase recovery results. These background mismatches may result from the illumination patterns, inhomogeneous media distribution, or defocused layers. In previous reports, the background was calibrated manually, which is time-consuming and unstable, since a new calibration is needed whenever the optical system is modified. It is also impossible to calibrate the background contributed by defocused layers, or during highly dynamic observations in which the background changes over time. To tackle the background mismatch and increase experimental robustness, we propose Retinex-qDPC, in which the image edge features serve as the data-fidelity term, yielding L2-Retinex-qDPC and L1-Retinex-qDPC for background-robust qDPC reconstruction. The split Bregman method is used to solve the L1-Retinex-qDPC model. We compare both Retinex-qDPC models against state-of-the-art DPC reconstruction algorithms, including total-variation-regularized qDPC and isotropic-qDPC, on both simulated and experimental data. Results show that Retinex-qDPC significantly improves phase recovery quality by suppressing the impact of background mismatch. Among the two, L1-Retinex-qDPC outperforms L2-Retinex-qDPC as well as the other state-of-the-art DPC algorithms. In general, Retinex-qDPC increases experimental robustness against background variations without any modification of the optical system, which will benefit all qDPC applications.
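The edge-feature (gradient-domain) fidelity idea behind the L2-Retinex variant can be illustrated with a minimal sketch. Assuming the standard L2-Retinex formulation — minimize $\|\nabla u - \nabla s\|_2^2 + \varepsilon\|u\|_2^2$ for an observed image $s$ — the normal equations form a screened Poisson problem that has a closed-form FFT solution. The function name, periodic boundary handling, and parameter choices below are illustrative assumptions, not the paper's actual solver:

```python
import numpy as np

def l2_retinex_background_removal(s, eps=1e-3):
    """Hypothetical L2-Retinex step: recover u minimizing
    ||grad(u) - grad(s)||^2 + eps*||u||^2 via an FFT-based
    screened Poisson solve (periodic boundaries).
    Matching gradients (edge features) instead of intensities
    makes the result insensitive to a smooth background offset."""
    H, W = s.shape
    # Forward differences, periodic boundary (consistent with the FFT solve).
    gx = np.roll(s, -1, axis=1) - s
    gy = np.roll(s, -1, axis=0) - s
    # Divergence via backward differences -> discrete 5-point Laplacian of s.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # Fourier symbol of minus the discrete Laplacian (non-negative).
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    lap = 4.0 - 2.0 * np.cos(2 * np.pi * fx) - 2.0 * np.cos(2 * np.pi * fy)
    # Normal equations (lap + eps) * u_hat = lap * s_hat, with F(div) = -lap * s_hat.
    u_hat = -np.fft.fft2(div) / (lap + eps)
    return np.real(np.fft.ifft2(u_hat))
```

Because the fidelity is placed on gradients rather than intensities, a spatially uniform background offset lands entirely in the DC term and is removed, and slowly varying backgrounds are attenuated — the kind of background robustness the abstract describes.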
Abstract: Type 2 diabetes (T2D) is a chronic metabolic disorder that can lead to blindness and cardiovascular disease. Information about early-stage T2D may be present in retinal fundus images, but the extent to which these images can be used in a screening setting is still unknown. In this study, deep neural networks were employed to differentiate between fundus images from individuals with and without T2D. We investigated three methods to achieve high classification performance, measured by the area under the receiver operating characteristic curve (ROC-AUC). A multi-target learning approach that simultaneously predicts retinal biomarkers as well as T2D performs best (AUC = 0.746 [$\pm$0.001]). Furthermore, classification performance can be improved by referring images with high prediction uncertainty to a specialist. We also show that combining the images of the left and right eye of each individual, using a simple averaging approach, further improves classification performance (AUC = 0.758 [$\pm$0.003]). These results are promising, suggesting the feasibility of screening for T2D from retinal fundus images.
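The two evaluation steps named above — averaging the left- and right-eye T2D probabilities per individual, and scoring with ROC-AUC — can be sketched in plain Python. The helper names and toy probability values are illustrative assumptions, not the study's actual model outputs:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def combine_eyes(left_probs, right_probs):
    """Per-individual fusion of the two eyes' predicted T2D probabilities
    by simple averaging, as described in the abstract."""
    return [(l + r) / 2 for l, r in zip(left_probs, right_probs)]
```

Averaging the two eyes acts as a tiny ensemble: uncorrelated per-eye prediction noise partially cancels, which is one plausible reason the fused score separates the classes better than either eye alone.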