Abstract: Osteoporosis is a common condition that increases fracture risk, especially in older adults. Early diagnosis is vital for preventing fractures, reducing treatment costs, and preserving mobility. However, healthcare providers face challenges such as limited labeled data and difficulties in processing medical images. This study presents a novel multi-modal learning framework that integrates clinical and imaging data to improve diagnostic accuracy and model interpretability. The model utilizes three pre-trained networks (VGG19, InceptionV3, and ResNet50) to extract deep features from X-ray images. These features are transformed using PCA to reduce dimensionality and focus on the most relevant components. A clustering-based selection process identifies the most representative components, which are then combined with preprocessed clinical data and processed through a fully connected network (FCN) for final classification. A feature importance plot highlights key variables, showing that Medical History, BMI, and Height were the main contributors, emphasizing the significance of patient-specific data. While imaging features were valuable, they had lower importance, indicating that clinical data are crucial for accurate predictions. This framework yields precise and interpretable predictions, enhancing transparency and building trust in AI-driven diagnoses for clinical integration.
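The following is a minimal sketch of the kind of pipeline this abstract describes, not the authors' implementation: the hypothetical arrays `xray` (N x 224 x 224 x 3 X-ray images, already preprocessed per backbone) and `clinical` (N x n_clin clinical features), the number of PCA components and clusters, and the FCN layer sizes are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from tensorflow.keras.applications import VGG19, InceptionV3, ResNet50
from tensorflow.keras import layers, models

def backbone_features(xray: np.ndarray) -> np.ndarray:
    """Extract and concatenate global-average-pooled features from the three pre-trained backbones."""
    feats = []
    for Builder in (VGG19, InceptionV3, ResNet50):
        net = Builder(weights="imagenet", include_top=False, pooling="avg")
        feats.append(net.predict(xray, verbose=0))
    return np.concatenate(feats, axis=1)              # (N, 512 + 2048 + 2048)

def select_components(z: np.ndarray, n_components: int = 64, n_clusters: int = 16) -> np.ndarray:
    """PCA-transform the deep features, then keep one representative
    component per cluster (the component closest to its cluster centroid)."""
    z_pca = PCA(n_components=n_components).fit_transform(z)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(z_pca.T)   # cluster the components
    keep = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(z_pca.T[idx] - km.cluster_centers_[c], axis=1)
        keep.append(idx[np.argmin(dist)])
    return z_pca[:, sorted(keep)]                      # (N, n_clusters)

def build_fcn(n_imaging: int, n_clinical: int) -> models.Model:
    """Fully connected network on the fused imaging + clinical feature vector."""
    x_in = layers.Input(shape=(n_imaging + n_clinical,))
    x = layers.Dense(128, activation="relu")(x_in)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)     # osteoporosis vs. normal
    model = models.Model(x_in, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# fused = np.concatenate([select_components(backbone_features(xray)), clinical], axis=1)
# build_fcn(16, clinical.shape[1]).fit(fused, labels, epochs=50, batch_size=32)
```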
Abstract: Parkinson's disease (PD), a severe and progressive neurological illness, affects millions of individuals worldwide. Accurate and early diagnosis is crucial for effective treatment and management of PD. This study presents a deep learning-based model for the diagnosis of PD using resting-state electroencephalogram (EEG) signals. The objective of the study is to develop an automated model that can extract complex hidden nonlinear features from EEG and demonstrate its generalizability on unseen data. The model is a hybrid architecture consisting of a convolutional neural network (CNN), a bidirectional gated recurrent unit (Bi-GRU), and an attention mechanism. The proposed method is evaluated on three public datasets (the UC San Diego dataset, PRED-CT, and the University of Iowa (UI) dataset), with one dataset used for training and the other two for evaluation. The results show that the proposed model can accurately diagnose PD with high performance on both the training and hold-out datasets. The model also performs well even when part of the input information is missing. These results have significant implications for patient treatment and for ongoing investigations into the early detection of Parkinson's disease. The suggested model holds promise as a non-invasive and reliable technique for early PD detection using resting-state EEG.
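Below is a minimal sketch of a CNN + Bi-GRU + attention classifier of the kind this abstract describes, not the published configuration: the input shape (EEG epochs of `n_timesteps` samples by `n_channels` electrodes), filter counts, kernel sizes, and unit counts are illustrative assumptions.

```python
from tensorflow.keras import layers, models

def build_pd_model(n_timesteps: int, n_channels: int) -> models.Model:
    x_in = layers.Input(shape=(n_timesteps, n_channels))

    # 1D CNN front end: local nonlinear feature extraction from the raw EEG epoch
    x = layers.Conv1D(32, kernel_size=7, padding="same", activation="relu")(x_in)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)

    # Bi-GRU: temporal dependencies in both directions
    h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)   # (batch, T, 128)

    # Simple attention pooling: score each time step, softmax over time, pool the weighted sequence
    scores = layers.Dense(1, activation="tanh")(h)        # (batch, T, 1)
    weights = layers.Softmax(axis=1)(scores)              # attention weights over time
    weighted = layers.Multiply()([h, weights])            # attention-weighted features
    context = layers.GlobalAveragePooling1D()(weighted)   # attention-weighted temporal pooling

    out = layers.Dense(1, activation="sigmoid")(context)  # PD vs. healthy control
    model = models.Model(x_in, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_pd_model(n_timesteps=1000, n_channels=32)   # e.g. 2-s epochs at 500 Hz
```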
Abstract: Oscillometric monitors are the most common automated blood pressure (BP) measurement devices used in non-specialist settings. However, their accuracy and reliability vary across settings, age groups, and health conditions. A main limitation of existing oscillometric monitors is that their underlying analysis algorithms are unable to fully capture the BP information encoded in the pattern of the recorded oscillometric pulses. In this paper, we propose a new 2D oscillometric data representation that enables a full characterization of the arterial system and empowers the application of deep learning to extract the most informative BP-related features. A hybrid convolutional-recurrent neural network was developed to capture the morphological information of the oscillometric pulses, as well as their temporal evolution over the cuff deflation period, from the 2D structure and to estimate BP. The performance of the proposed method was verified on three oscillometric databases collected from the wrists and upper arms of 245 individuals. The method achieves a mean error and standard deviation of error as low as 0.08 mmHg and 2.4 mmHg for systolic BP, and 0.04 mmHg and 2.2 mmHg for diastolic BP, respectively. Our proposed method outperformed state-of-the-art techniques and satisfied the current international standards for BP monitors by a wide margin. The proposed method shows promise toward robust and objective BP estimation in a variety of patients and monitoring situations.
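A minimal sketch of a hybrid convolutional-recurrent estimator in the spirit of the approach above is given below; it assumes a hypothetical 2D input of shape (n_pulses, pulse_length, 1), where each row is one extracted oscillometric pulse and rows follow the cuff-deflation order. The layer sizes and the per-pulse CNN design are illustrative assumptions, not the authors' architecture.

```python
from tensorflow.keras import layers, models

def build_bp_model(n_pulses: int, pulse_length: int) -> models.Model:
    x_in = layers.Input(shape=(n_pulses, pulse_length, 1))

    # Per-pulse CNN: morphological features of each oscillometric pulse
    pulse_cnn = models.Sequential([
        layers.Input(shape=(pulse_length, 1)),
        layers.Conv1D(16, 5, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(32, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling1D(),
    ])
    pulse_feats = layers.TimeDistributed(pulse_cnn)(x_in)     # (batch, n_pulses, 32)

    # Recurrent layer: temporal evolution of the pulses over cuff deflation
    h = layers.Bidirectional(layers.LSTM(32))(pulse_feats)

    out = layers.Dense(2, name="sbp_dbp")(h)                  # [systolic, diastolic] in mmHg
    model = models.Model(x_in, out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# model = build_bp_model(n_pulses=60, pulse_length=128)
```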
Abstract: Segmentation of lung tissue in computed tomography (CT) images is a precursor to most pulmonary image analysis applications. Semantic segmentation methods based on deep learning have exhibited top-tier performance in recent years. This paper presents a fully automatic method, which we call Lung-Net, for identifying the lungs in three-dimensional (3D) pulmonary CT images. We conjectured that a significantly deeper network with InceptionV3 units can achieve a better feature representation of lung CT images without increasing model complexity in terms of the number of trainable parameters. The method has three main advantages. First, a U-Net architecture with InceptionV3 blocks is developed to resolve the problems of performance degradation and parameter overload. Second, using information from consecutive slices, a new data structure is created to increase generalization potential, allowing more discriminative features to be extracted by making the data representation as efficient as possible. Finally, the robustness of the proposed segmentation framework was quantitatively assessed using one public database for both training and testing (LUNA16) and two public databases (the ISBI VESSEL12 challenge and the CRPF dataset) for testing only; these databases consist of 700, 23, and 40 CT images, respectively, acquired with different scanners and protocols. Based on the experimental results, the proposed method achieved competitive results compared with existing techniques, with Dice coefficients of 99.7, 99.1, and 98.8 for the LUNA16, VESSEL12, and CRPF datasets, respectively. For segmenting lung tissue in CT images, the proposed model is efficient in terms of time and parameters and outperforms other state-of-the-art methods. Additionally, the model is publicly accessible via a graphical user interface.
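Below is a minimal sketch of a U-Net-style segmentation network with inception-style blocks, in the spirit of the Lung-Net described above; the 2.5D input (three consecutive CT slices stacked as channels), the encoder depth, and the filter counts are illustrative assumptions, not the published architecture.

```python
from tensorflow.keras import layers, models

def inception_block(x, filters: int):
    """Parallel 1x1 / 3x3 / 5x5 convolutions concatenated, inception-style."""
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    return layers.Concatenate()([b1, b3, b5])

def build_lung_unet(height: int = 256, width: int = 256, n_slices: int = 3) -> models.Model:
    x_in = layers.Input(shape=(height, width, n_slices))    # centre slice plus its neighbours

    # Encoder
    e1 = inception_block(x_in, 16)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = inception_block(p1, 32)
    p2 = layers.MaxPooling2D(2)(e2)

    # Bottleneck
    b = inception_block(p2, 64)

    # Decoder with skip connections
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    d2 = inception_block(layers.Concatenate()([u2, e2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(d2)
    d1 = inception_block(layers.Concatenate()([u1, e1]), 16)

    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)     # lung mask for the centre slice
    model = models.Model(x_in, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_lung_unet()
```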
Abstract: In recent years, cardiovascular diseases (CVDs) have become one of the leading causes of mortality globally. CVDs appear with minor symptoms and progressively worsen. In the early stages of CVD, most people experience symptoms such as exhaustion, shortness of breath, ankle swelling, and fluid retention. Coronary artery disease (CAD), arrhythmia, cardiomyopathy, congenital heart defects (CHD), mitral regurgitation, and angina are the most common CVDs. Clinical methods such as blood tests, electrocardiography (ECG) signals, and medical imaging are the most effective methods used for the detection of CVDs. Among the diagnostic methods, cardiac magnetic resonance imaging (CMR) is increasingly used to diagnose and monitor disease, plan treatment, and predict CVDs. Despite all the advantages of CMR data, CVD diagnosis is challenging for physicians due to the large number of slices, low contrast, and other factors. To address these issues, deep learning (DL) techniques have been applied to the diagnosis of CVDs using CMR data, and much research is currently being conducted in this field. This review provides an overview of the studies on CVD detection using CMR images and DL techniques. The introduction examines CVD types, diagnostic methods, and the most important medical imaging techniques. The following sections present investigations into detecting CVDs using CMR images and the most significant DL methods. Another section discusses the challenges of diagnosing CVDs from CMR data. The discussion section then considers the results of this review, and future work on CVD diagnosis from CMR images and DL techniques is outlined. The most important findings of this study are presented in the conclusion.