Abstract: Despite continuous advancements in cancer treatment, brain metastatic disease remains a significant complication of primary cancer and is associated with an unfavorable prognosis. One approach to improving diagnosis, management, and outcomes is to implement artificial-intelligence algorithms for the automated segmentation of both pre- and post-treatment brain MRI images. Such algorithms rely on volumetric criteria for lesion identification and treatment response assessment, which are still not available in clinical practice. It is therefore critical to establish rapid volumetric segmentation methods that can be translated to clinical practice and that are trained on high-quality annotated data. The BraTS-METS 2025 Lighthouse Challenge aims to address this need by establishing inter-rater and intra-rater variability in dataset annotation: high-quality annotated datasets are generated from four individual instances of segmentation by neuroradiologists, each recorded on video (two instances annotated "from scratch" and two after AI pre-segmentation). This high-quality annotated dataset will be used for the testing phase of the 2025 Lighthouse Challenge and will be publicly released once the challenge concludes. The 2025 Lighthouse Challenge will also release the 2023 and 2024 segmented datasets, which were annotated using an established pipeline of pre-segmentation, student annotation, review by two neuroradiologists, and finalization by one neuroradiologist. The challenge builds upon its previous edition by including post-treatment cases in the dataset. Using these high-quality annotated datasets, the 2025 Lighthouse Challenge plans to benchmark algorithms for automated segmentation of pre- and post-treatment brain metastases (BM), trained on diverse, multi-institutional datasets of MRI images obtained from patients with brain metastases.
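The volumetric criteria and inter-rater agreement this abstract refers to can be made concrete with a short sketch. The following is a hypothetical illustration, not code from the challenge: it computes lesion volume from a binary 3D segmentation mask and the Dice overlap between two raters' masks. The function names (lesion_volume_mm3, dice_score), array shapes, and voxel spacing are all illustrative assumptions; only numpy is used.

    # Minimal sketch (illustrative, not from the challenge): volume of a binary
    # lesion mask and Dice agreement between two raters' annotations.
    import numpy as np

    def lesion_volume_mm3(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
        """Volume of a binary lesion mask in cubic millimetres."""
        voxel_volume = float(np.prod(voxel_spacing_mm))
        return float(mask.astype(bool).sum()) * voxel_volume

    def dice_score(mask_a, mask_b):
        """Dice overlap between two binary masks (1.0 = perfect agreement)."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both raters marked no lesion: treat as full agreement
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Toy example: two slightly different annotations of the same 3D volume.
    rng = np.random.default_rng(0)
    rater1 = rng.random((64, 64, 64)) > 0.99
    rater2 = rater1.copy()
    rater2[32, 32, 32] ^= True  # simulate a one-voxel disagreement
    print(lesion_volume_mm3(rater1), dice_score(rater1, rater2))

Per-lesion Dice of this kind is one common way to quantify the inter-rater and intra-rater variability the challenge sets out to measure; the challenge's actual evaluation metrics may differ.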
Abstract: Purpose: To develop a deep learning model that predicts active inflammation from sacroiliac joint radiographs and to compare its performance with that of radiologists. Materials and Methods: A total of 1,537 (1,752 after augmentation) grade 0 sacroiliac joints (SIJs) of 768 patients were retrospectively analyzed. Gold-standard MRI exams showed active inflammation in 330 joints according to ASAS criteria. A convolutional neural network model (JointNET) was developed to detect MRI-based active inflammation labels solely from radiographs. Two radiologists blindly evaluated the radiographs for comparison. Python, PyTorch, and SPSS were used for the analyses. P < 0.05 was considered statistically significant. Results: JointNET differentiated active inflammation from radiographs with a mean AUROC of 89.2% (95% CI: 86.8%, 91.7%). Sensitivity was 69.0% (95% CI: 65.3%, 72.7%) and specificity was 90.4% (95% CI: 87.8%, 92.9%). Mean accuracy was 90.2% (95% CI: 87.6%, 92.8%). When prevalence was assumed to be 1%, the positive predictive value was 74.6% (95% CI: 72.5%, 76.7%) and the negative predictive value was 87.9% (95% CI: 85.4%, 90.5%). Statistical analyses showed a significant difference between the active inflammation and healthy groups (p < 0.05). Radiologists' accuracies in discriminating active inflammation from sacroiliac joint radiographs were below 65%. Conclusion: JointNET successfully predicts active inflammation from sacroiliac joint radiographs, with performance superior to human observers.
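This abstract reports predictive values under an assumed 1% prevalence. The standard Bayes-rule relationship behind such prevalence adjustments is sketched below; this is the textbook formulation with generic illustrative inputs, not the authors' own code, and the abstract does not detail the exact adjustment procedure they used. The function name predictive_values is hypothetical.

    # Textbook sketch: how sensitivity, specificity, and an assumed prevalence
    # combine (via Bayes' rule) into PPV and NPV. Illustrative only; not the
    # paper's actual computation.
    def predictive_values(sensitivity, specificity, prevalence):
        """Return (PPV, NPV) for a test at a given disease prevalence."""
        tp = sensitivity * prevalence                   # true positive fraction
        fp = (1.0 - specificity) * (1.0 - prevalence)   # false positive fraction
        tn = specificity * (1.0 - prevalence)           # true negative fraction
        fn = (1.0 - sensitivity) * prevalence           # false negative fraction
        return tp / (tp + fp), tn / (tn + fn)

    # Generic example: a test with 90% sensitivity and 90% specificity at
    # 1% prevalence yields a low PPV despite high specificity.
    ppv, npv = predictive_values(sensitivity=0.90, specificity=0.90, prevalence=0.01)
    print(f"PPV={ppv:.3f}, NPV={npv:.3f}")

The example illustrates why predictive values, unlike sensitivity and specificity, depend strongly on the prevalence assumed.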