Abstract: Every year, humanity loses about 1.5 million people to diabetes. Continuous monitoring of blood glucose is therefore essential, but the conventional approach, fingertip pricking, causes mental and physical discomfort to the patient. This work introduces a painless and inexpensive non-invasive blood glucose monitoring system. Exploiting recent advances in deep learning, we develop a hybrid convolutional neural network (CNN) - gated recurrent unit (GRU) network for this task. The proposed system deploys the CNN to extract spatial patterns from the photoplethysmogram (PPG) signal, while the GRU captures temporal patterns. On the test dataset, the proposed system achieves a Mean Absolute Error (MAE) of 2.96 mg/dL, a Mean Square Error (MSE) of 15.53 (mg/dL)², a Root Mean Square Error (RMSE) of 3.94 mg/dL, and a coefficient of determination ($R^2$ score) of 0.97. According to Clarke Error Grid analysis, 100% of the points fall within the clinically acceptable zone (Zone A).
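To make the hybrid architecture concrete, the sketch below shows one plausible way to stack 1D convolutional layers (spatial feature extraction from the PPG segment) in front of a GRU (temporal modeling) for glucose regression. The window length, filter counts, and layer sizes are illustrative assumptions, not the configuration reported in the abstract.

```python
# Minimal sketch of a hybrid CNN-GRU regressor for PPG-based glucose estimation.
# WINDOW_LEN, filter counts, and the GRU width are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 256  # assumed number of PPG samples per input segment


def build_cnn_gru(window_len=WINDOW_LEN):
    inputs = layers.Input(shape=(window_len, 1))  # single-channel PPG segment
    # CNN stage: extract local (spatial) patterns from the waveform
    x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    # GRU stage: model temporal dependencies across the extracted features
    x = layers.GRU(64)(x)
    outputs = layers.Dense(1)(x)  # predicted glucose level in mg/dL
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model


model = build_cnn_gru()
model.summary()
```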
Abstract: Diabetic retinopathy (DR) is a significant cause of vision impairment, emphasizing the critical need for early detection and timely intervention to avert visual deterioration. Diagnosing DR is inherently complex, as it necessitates meticulous examination of intricate retinal images by experienced specialists, making early diagnosis essential for effective treatment and the prevention of eventual blindness. Traditional diagnostic methods, which rely on human interpretation of these medical images, face challenges in terms of accuracy and efficiency. In this research, we introduce a method that offers superior precision in DR diagnosis compared to these traditional methods by employing advanced deep learning techniques. Central to this approach is transfer learning: pre-existing, well-established models, specifically InceptionResNetv2 and Inceptionv3, are used to extract features, with selected layers fine-tuned to the requirements of this diagnostic task. Concurrently, we present a newly devised model, DiaCNN, tailored for the classification of eye diseases. To validate the efficacy of the proposed methodology, we leveraged the Ocular Disease Intelligent Recognition (ODIR) dataset, which comprises eight eye-disease categories. The results were promising. The InceptionResNetv2 model with transfer learning registered an impressive 97.5% accuracy in both the training and testing phases. Its counterpart, the Inceptionv3 model, achieved 99.7% accuracy during training and 97.5% during testing. Remarkably, the DiaCNN model showcased unparalleled precision, achieving 100% accuracy in training and 98.3% in testing.
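As a rough illustration of the transfer-learning recipe described above (reusing a pretrained backbone, freezing most layers, and fine-tuning the rest for eight-class eye-disease classification), the sketch below uses Keras' stock InceptionV3. The input size, number of frozen layers, and classification head are assumptions for illustration and do not reproduce the paper's exact setup or the DiaCNN architecture.

```python
# Minimal sketch of transfer learning with InceptionV3 for 8-class eye-disease
# classification on ODIR-style data. Frozen-layer cut-off and the dense head
# are assumptions, not the authors' reported configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 8  # ODIR comprises eight eye-disease categories

# Pretrained backbone without its ImageNet classification head
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))

# Freeze all but the last ~30 layers (assumed cut-off) so only top layers fine-tune
for layer in base.layers[:-30]:
    layer.trainable = False

# Task-specific classification head
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```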