Abstract: This paper addresses the medical imaging problem of joint detection in the upper limbs, viz. the elbow, shoulder, wrist and finger joints. Localization of joints from X-ray and Computerized Tomography (CT) scans is an essential step in the assessment of various bone-related medical conditions such as osteoarthritis and rheumatoid arthritis, and can even be used for automated bone fracture detection. Automated joint localization also detects the corresponding bones and can serve as input to deep learning-based models used for the computerized diagnosis of the aforementioned medical disorders. This increases the accuracy of prediction and aids radiologists in analyzing the scans, which is a complex and exhausting task. This paper provides a detailed comparative study of diverse Deep Learning (DL) models - YOLOv3, YOLOv7, EfficientDet and CenterNet - for multiple bone joint detection in the upper limbs of the human body. The research analyses the performance of the different DL models mathematically, graphically and visually. These models are trained and tested on a portion of the openly available MURA (musculoskeletal radiographs) dataset. The study found that the best Mean Average Precision (mAP at 0.5:0.95) values of YOLOv3, YOLOv7, EfficientDet and CenterNet are 35.3, 48.3, 46.5 and 45.9 respectively. Moreover, YOLOv7 was found to perform the best at accurately predicting bounding boxes, while YOLOv3 performed the worst in the visual analysis test. Code available at https://github.com/Sohambasu07/BoneJointsLocalization
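To make the reported metric concrete, the following minimal Python sketch (not taken from the linked repository) illustrates how the mAP at 0.5:0.95 criterion aggregates detection quality over IoU thresholds; the boxes are hypothetical, and the computation is deliberately simplified (the full metric averages per-class areas under precision-recall curves rather than raw precision).

import numpy as np

def iou(box_a, box_b):
    # Intersection-over-Union of two axis-aligned boxes in (x1, y1, x2, y2) format.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical predicted and ground-truth joint boxes for one radiograph.
predictions = [(48, 52, 150, 160), (200, 210, 300, 320)]
ground_truth = [(50, 50, 150, 155), (205, 215, 298, 330)]

# A detection counts as correct only if its IoU with some ground-truth box
# exceeds the threshold; the 0.5:0.95 criterion averages over ten thresholds.
precisions = []
for t in np.arange(0.5, 1.0, 0.05):
    hits = sum(any(iou(p, g) >= t for g in ground_truth) for p in predictions)
    precisions.append(hits / len(predictions))
print("toy precision averaged over IoU 0.5:0.95 =", np.mean(precisions))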
Abstract: Breast cancer classification stands as a pivotal pillar in ensuring timely diagnosis and effective treatment. This study on histopathological images underscores the profound significance of harnessing the synergistic capabilities of colour space ensembling and quantum-classical stacking to elevate the precision of breast cancer classification. By delving into the distinct colour spaces of RGB, HSV and CIE L*u*v, the authors initiated a comprehensive investigation guided by advanced methodologies. Employing the DenseNet121 architecture for feature extraction, the authors have capitalized on the robustness of Random Forest, SVM, QSVC, and VQC classifiers. This research encompasses a unique feature fusion technique within the colour space ensemble. This approach not only deepens our comprehension of breast cancer classification but also marks a milestone in personalized medical assessment. The amalgamation of quantum and classical classifiers through stacking emerges as a potent catalyst, effectively mitigating the inherent constraints of individual classifiers and paving a robust path towards more dependable and refined breast cancer identification. Through rigorous experimentation and meticulous analysis, the fusion of colour spaces such as RGB with HSV and RGB with CIE L*u*v yields a classification accuracy approaching unity. This underscores the transformative potential of our approach, where the fusion of diverse colour spaces and the synergy of the quantum and classical realms converge to establish a new horizon in medical diagnostics. Thus, the implications of this research extend across medical disciplines, offering promising avenues for advancing diagnostic accuracy and treatment efficacy.
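As an illustration of the described pipeline, the Python sketch below (assumed, not the authors' code) converts an RGB tile to HSV, extracts DenseNet121 features from both views, fuses them by concatenation, and trains a stacked ensemble; classical estimators stand in for QSVC and VQC, and random patches stand in for histopathology tiles, so the names and shapes here are illustrative assumptions only.

import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# DenseNet121 with its classification head removed, used as a feature extractor.
backbone = models.densenet121(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([transforms.ToTensor(),
                                 transforms.Resize((224, 224), antialias=True)])

def extract_features(img_uint8):
    # Return the 1024-d pooled DenseNet121 feature vector for an HxWx3 uint8 image.
    with torch.no_grad():
        return backbone(preprocess(img_uint8).unsqueeze(0)).squeeze(0).numpy()

def fused_features(rgb_img):
    # Colour-space fusion: concatenate features from the RGB view and its HSV view.
    hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2HSV)
    return np.concatenate([extract_features(rgb_img), extract_features(hsv_img)])

# Random patches stand in for histopathology tiles (0 = benign, 1 = malignant).
X = np.stack([fused_features(np.random.randint(0, 255, (224, 224, 3), np.uint8))
              for _ in range(8)])
y = np.array([0, 1] * 4)

# Stacked ensemble of classical base learners; quantum classifiers (QSVC, VQC)
# could be slotted in as base estimators through the same interface.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("svm", SVC())],
    final_estimator=LogisticRegression(), cv=2)
stack.fit(X, y)
print(stack.predict(X[:2]))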
Abstract: We live in a translingual society; in order to communicate with people from different parts of the world, we need expertise in their respective languages. Learning all of these languages is not feasible; therefore, we need a mechanism that can do this task for us. Machine translators have emerged as a tool that can perform this task. To develop a machine translator, we need to define several different kinds of rules. The very first module in the machine translation pipeline is morphological analysis. Stemming and lemmatization fall under morphological analysis. In this paper we have created a lemmatizer that generates rules for removing affixes, along with additional rules for producing a proper root word.
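A minimal Python sketch of the rule-based idea follows, assuming illustrative English suffix rules rather than the paper's actual rule set: each rule removes an affix and optionally appends characters so that the output is a proper root word.

# Each rule is (suffix to remove, string to append to form the root).
SUFFIX_RULES = [
    ("ies", "y"),   # studies -> study
    ("ing", ""),    # walking -> walk
    ("ed", ""),     # jumped  -> jump
    ("s", ""),      # cats    -> cat
]

def lemmatize(word, min_root_len=3):
    # Apply the first matching suffix rule; leave short words untouched.
    for suffix, addition in SUFFIX_RULES:
        if word.endswith(suffix):
            root = word[:-len(suffix)] + addition
            if len(root) >= min_root_len:
                return root
    return word

for w in ["studies", "walking", "jumped", "cats", "is"]:
    print(w, "->", lemmatize(w))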