Abstract: Background: Diabetic sensorimotor polyneuropathy (DSPN) is a major long-term complication in diabetic patients, associated with painful neuropathy, foot ulceration, and amputation. The Michigan Neuropathy Screening Instrument (MNSI) is one of the most common screening techniques for DSPN; however, it does not provide any direct severity grading system. Method: To design and model a DSPN severity grading system for the MNSI, 19 years of data from the Epidemiology of Diabetes Interventions and Complications (EDIC) clinical trials were used. MNSI variables and patient outcomes were investigated using machine learning tools to identify the features most strongly associated with DSPN identification. A multivariable logistic regression-based nomogram was generated and validated for DSPN severity grading. Results: The following top-7 ranked MNSI features were identified as key features for DSPN identification using the extra trees model: 10-gm filament, vibration perception (R), vibration perception (L), previous diabetic neuropathy, appearance of deformities, appearance of callus, and appearance of fissure. The areas under the curve (AUC) of the nomogram were 0.9421 and 0.946 for the internal and external datasets, respectively. From the developed nomogram, the probability of having DSPN was predicted, and a DSPN severity scoring system for the MNSI was developed from the probability score. The model performance was validated on an independent dataset. Patients were stratified into four severity levels: absent, mild, moderate, and severe, using cut-off values of 10.5, 12.7, and 15, corresponding to a DSPN probability of less than 50%, 75% to 90%, and above 90%, respectively. Conclusions: This study provides a simple, easy-to-use, and reliable algorithm for defining the prognosis and management of patients with DSPN.
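A minimal sketch of the grading pipeline described in this abstract, for illustration only: the feature column names, the DataFrame `mnsi_df`, and the label column `dspn_label` are hypothetical placeholders, while the cut-off values (10.5, 12.7, 15) and severity grades are taken directly from the abstract. The actual fitted nomogram and its point assignments are not reproduced here.

```python
# Hypothetical sketch: logistic-regression-based DSPN grading from top-7 MNSI features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Placeholder names for the seven MNSI features reported in the abstract.
FEATURES = [
    "monofilament_10g", "vibration_right", "vibration_left",
    "previous_neuropathy", "deformity", "callus", "fissure",
]

def fit_dspn_model(mnsi_df: pd.DataFrame) -> LogisticRegression:
    """Fit a multivariable logistic regression on the top-7 MNSI features."""
    model = LogisticRegression(max_iter=1000)
    model.fit(mnsi_df[FEATURES], mnsi_df["dspn_label"])
    return model

def severity_grade(score: float) -> str:
    """Map a nomogram-derived severity score to a DSPN grade using the reported cut-offs."""
    if score < 10.5:
        return "absent"    # DSPN probability below 50%
    if score < 12.7:
        return "mild"
    if score < 15:
        return "moderate"  # DSPN probability roughly 75% to 90%
    return "severe"        # DSPN probability above 90%
```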
Abstract: Semantic image segmentation is one of the most important tasks in medical image analysis. Most state-of-the-art deep learning methods require a large number of accurately annotated examples for model training; however, accurate annotation is difficult to obtain, especially in medical applications. In this paper, we propose a spatially constrained deep convolutional neural network (DCNN) to achieve smooth and robust image segmentation using inaccurately annotated labels for training. In the proposed method, image segmentation is formulated as a graph optimization problem that is solved by the DCNN model learning process. The cost function to be optimized consists of a unary term calculated by cross-entropy measurement and a pairwise term that enforces local label consistency. The proposed method has been evaluated on corneal confocal microscopy (CCM) images for nerve fiber segmentation, where accurate annotations are extremely difficult to obtain. Based on both quantitative results on a synthetic dataset and qualitative assessment on a real dataset, the proposed method achieves superior performance, producing high-quality segmentation results even with inaccurate labels for training.
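An illustrative sketch of a loss of the form described in this abstract: a unary cross-entropy term plus a pairwise term penalizing disagreement between adjacent pixels. The weighting `lam` and the use of softmax-probability differences for the pairwise term are assumptions; the paper's exact pairwise formulation may differ.

```python
# Assumed sketch of a spatially constrained segmentation loss (unary + pairwise terms).
import torch
import torch.nn.functional as F

def spatially_constrained_loss(logits: torch.Tensor,
                               target: torch.Tensor,
                               lam: float = 0.1) -> torch.Tensor:
    """logits: (N, C, H, W) network outputs; target: (N, H, W) possibly noisy labels."""
    # Unary term: standard per-pixel cross entropy against the (inaccurate) annotations.
    unary = F.cross_entropy(logits, target)

    # Pairwise term: encourage neighbouring pixels to receive similar class probabilities,
    # which smooths predictions even when the training labels are noisy.
    prob = F.softmax(logits, dim=1)
    dh = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().mean()  # vertical neighbours
    dw = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean()  # horizontal neighbours
    pairwise = dh + dw

    return unary + lam * pairwise
```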