Abstract: Retinal vessel segmentation is critical for diagnosing ocular conditions, yet current deep learning methods are limited by modality-specific challenges and significant distribution shifts across imaging devices, resolutions, and anatomical regions. In this paper, we propose GrInAdapt, a novel framework for source-free multi-target domain adaptation that leverages multi-view images to refine segmentation labels and enhance model generalizability for optical coherence tomography angiography (OCTA) of the fundus of the eye. GrInAdapt follows an intuitive three-step approach: (i) grounding images in a common anchor space via registration, (ii) integrating predictions from multiple views to achieve improved label consensus, and (iii) adapting the source model to diverse target domains. Furthermore, GrInAdapt is flexible enough to incorporate auxiliary modalities, such as color fundus photography, that provide complementary cues for robust vessel segmentation. Extensive experiments on a multi-device, multi-site, and multi-modal retinal dataset demonstrate that GrInAdapt significantly outperforms existing domain adaptation methods, achieving higher segmentation accuracy and robustness across multiple domains. These results highlight the potential of GrInAdapt to advance automated retinal vessel analysis and support robust clinical decision-making.
Abstract: We present the development of SpeCamX, a mobile application that transforms any unmodified smartphone into a powerful multispectral imager. Our application includes an augmented bilirubinometer, enabling accurate prediction of blood bilirubin levels (BBL). In a clinical study involving 320 patients with liver diseases, we used SpeCamX to image the bulbar conjunctiva region and employed a hybrid machine learning model to predict BBL. The predictions correlated strongly with blood test results, demonstrating the efficacy of our approach. Furthermore, we compared our method, which uses spectrally augmented learning (SAL), with traditional learning based on RGB photographs (RGBL); our results clearly indicate that SpeCamX outperforms RGBL in prediction accuracy, efficiency, and stability. This study highlights the potential of SpeCamX to improve the prediction of bio-chromophore levels, and its ability to transform an ordinary smartphone into a powerful medical tool without additional hardware or expertise. This makes it suitable for widespread use, particularly in areas where medical resources are scarce.