Abstract: Prostate cancer ranks among the leading health issues affecting men, and the Gleason scoring system is the primary method for its diagnosis and prognosis. The system relies on expert pathologists to evaluate prostate tissue samples and assign a Gleason grade, a task that demands substantial time and manual effort. To address this challenge, artificial intelligence (AI) approaches have been explored to automate the grading process. This study evaluates and compares the effectiveness of three deep learning methodologies, YOLO, Vision Transformers, and Vision Mamba, in classifying Gleason grades from histopathology images, with the goal of improving diagnostic precision and efficiency in prostate cancer management. Two publicly available datasets, Gleason2019 and SICAPv2, were used to train and evaluate the YOLO, Vision Transformer, and Vision Mamba models. Each model was assessed on its ability to classify Gleason grades accurately, using metrics such as false positive rate, false negative rate, precision, and recall; the computational efficiency and clinical applicability of each method were also examined. Vision Mamba demonstrated superior performance across all metrics, achieving high precision and recall while minimizing false positives and false negatives. YOLO showed promise in terms of speed and efficiency, which is particularly beneficial for real-time analysis. Vision Transformers excelled at capturing long-range dependencies within images, although they incurred higher computational complexity than the other models. Overall, Vision Mamba emerges as the most effective model for Gleason grade classification in histopathology images, offering a balance between accuracy and computational efficiency.
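As a point of reference for the classification metrics named above, the following minimal sketch shows how precision, recall, false positive rate, and false negative rate can be derived from a one-vs-rest confusion matrix. It is an illustration of the standard definitions only; the function name, variable names, and per-grade binarization are assumptions, not the evaluation code used in this study.

```python
import numpy as np

def classification_metrics(y_true, y_pred, positive_grade):
    """Illustrative one-vs-rest metrics for a single Gleason grade.

    y_true, y_pred: integer arrays of reference and predicted grade labels.
    positive_grade: the grade treated as the positive class.
    """
    y_true = np.asarray(y_true) == positive_grade
    y_pred = np.asarray(y_pred) == positive_grade

    tp = np.sum(y_true & y_pred)    # correctly predicted positive cases
    fp = np.sum(~y_true & y_pred)   # negatives predicted as positive
    fn = np.sum(y_true & ~y_pred)   # positives missed by the model
    tn = np.sum(~y_true & ~y_pred)  # correctly predicted negative cases

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0   # sensitivity
    fpr = fp / (fp + tn) if (fp + tn) else 0.0      # false positive rate
    fnr = fn / (fn + tp) if (fn + tp) else 0.0      # false negative rate
    return precision, recall, fpr, fnr

# Example: compare reference and predicted grades for Gleason pattern 4
print(classification_metrics([3, 4, 4, 5, 4], [3, 4, 5, 5, 4], positive_grade=4))
```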
Abstract: Accurate segmentation of prostate cancer histopathology images is crucial for diagnosis and treatment planning. This study presents a comparative analysis of three deep learning-based methods, Mamba, SAM, and YOLO, for segmenting prostate cancer histopathology images. We evaluated the performance of these models on two comprehensive datasets, Gleason2019 and SICAPv2, using Dice score, precision, and recall metrics. Our results show that the High-order Vision Mamba UNet (H-vmunet) model outperforms the other two models, achieving the highest scores across all metrics on both datasets. The H-vmunet model's advanced architecture, which integrates high-order visual state spaces and 2D-selective-scan operations, enables efficient and sensitive lesion detection across different scales. Our study demonstrates the potential of the H-vmunet model for clinical applications and highlights the importance of robust validation and comparison of deep learning-based methods for medical image analysis. The findings of this study contribute to the development of accurate and reliable computer-aided diagnosis systems for prostate cancer. The code is available at http://github.com/alibdz/prostate-segmentation.
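For readers unfamiliar with the segmentation metrics reported here, the sketch below illustrates how Dice score, precision, and recall can be computed from binary masks with NumPy. This is a minimal sketch of the standard definitions, not the evaluation code from the linked repository; the function name and the smoothing term `eps` are assumptions.

```python
import numpy as np

def segmentation_metrics(pred_mask, gt_mask, eps=1e-7):
    """Dice score, precision, and recall for binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)

    tp = np.sum(pred & gt)    # correctly segmented lesion pixels
    fp = np.sum(pred & ~gt)   # over-segmented (spurious) pixels
    fn = np.sum(~pred & gt)   # missed lesion pixels

    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall

# Example with small toy masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(segmentation_metrics(pred, gt))
```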