Abstract: Evaluating radiology reports is a challenging problem: factual correctness is paramount, because these reports communicate clinical findings about medical images. Existing automatic evaluation metrics either fail to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). In this paper, we introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human-interpretable explanations of clinically significant errors, enabling feedback loops with end users, and 3) a lightweight open-source method that matches the performance of commercial counterparts. We validate GREEN by comparing it to GPT-4, as well as to the error counts of 6 experts and the preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts but also higher alignment with expert preferences than previous approaches.
Abstract: Accurate quantification of cerebral blood flow (CBF) is essential for the diagnosis and assessment of a wide range of neurological diseases. Positron emission tomography (PET) with radiolabeled water (15O-water) is considered the gold standard for the measurement of CBF in humans. PET imaging, however, is not widely available because of its prohibitive costs and its use of short-lived radiopharmaceutical tracers that typically require onsite cyclotron production. Magnetic resonance imaging (MRI), in contrast, is more readily accessible and does not involve ionizing radiation. This study presents a convolutional encoder-decoder network with attention mechanisms to predict gold-standard 15O-water PET CBF from multi-sequence MRI scans, thereby eliminating the need for radioactive tracers. Inputs to the prediction model include several commonly used MRI sequences (T1-weighted, T2-FLAIR, and arterial spin labeling). The model was trained and validated using 5-fold cross-validation in a group of 126 subjects consisting of healthy controls and cerebrovascular disease patients, all of whom underwent simultaneous 15O-water PET/MRI. The results show that such a model can successfully synthesize high-quality PET CBF measurements (with an average SSIM of 0.924 and PSNR of 38.8 dB) and is more accurate than concurrent and previous PET synthesis methods. We also demonstrate the clinical significance of the proposed algorithm by evaluating its agreement for identifying vascular territories with abnormally low CBF. Such methods may enable more widespread and accurate CBF evaluation in larger cohorts who cannot undergo PET imaging due to radiation concerns, lack of access, or logistical challenges.
Abstract: Accurate quantification of cerebral blood flow (CBF) is essential for the diagnosis and assessment of cerebrovascular diseases such as Moyamoya disease, carotid stenosis, aneurysms, and stroke. Positron emission tomography (PET) is currently regarded as the gold standard for the measurement of CBF in the human brain. PET imaging, however, is not widely available because of its prohibitive costs, use of ionizing radiation, and logistical challenges: the 2-minute half-life of the oxygen-15 radioisotope requires a co-localized cyclotron for its delivery. Magnetic resonance imaging (MRI), in contrast, is more readily available and does not involve ionizing radiation. In this study, we propose a multi-task learning framework for brain MRI-to-PET translation and disease diagnosis. The proposed framework comprises two primary networks: (1) an attention-based 3D encoder-decoder convolutional neural network (CNN) that synthesizes high-quality PET CBF maps from multi-contrast MRI images, and (2) a multi-scale 3D CNN that identifies the brain disease corresponding to the input MRI images. Our multi-task framework yields promising results on the task of MRI-to-PET translation, achieving an average structural similarity index (SSIM) of 0.94 and peak signal-to-noise ratio (PSNR) of 38 dB on a cohort of 120 subjects. In addition, we show that integrating multiple MRI modalities can improve the clinical diagnosis of brain diseases.
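As a point of reference for the PSNR figures quoted in the imaging abstracts above, the following is a minimal sketch of how peak signal-to-noise ratio is typically computed between a synthesized map and a ground-truth map. The array values and data range below are illustrative assumptions only, not data from either study:

```python
import math

def psnr(reference, prediction, data_range=255.0):
    """Peak signal-to-noise ratio, in dB, between two same-shape 2-D arrays.

    PSNR = 10 * log10(data_range^2 / MSE), where MSE is the mean squared
    error between corresponding pixels. Higher values indicate closer images.
    """
    flat_ref = [v for row in reference for v in row]
    flat_pred = [v for row in prediction for v in row]
    mse = sum((r - p) ** 2 for r, p in zip(flat_ref, flat_pred)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(data_range ** 2 / mse)

# Toy 2x2 "CBF maps" with hypothetical intensity values
ref = [[0, 255], [128, 64]]
pred = [[0, 250], [130, 64]]
print(round(psnr(ref, pred), 2))  # small pixel errors yield a high PSNR in dB
```

SSIM, the other metric both abstracts report, is a windowed structural comparison rather than a per-pixel error, so in practice it is computed with a library routine (e.g., scikit-image's `structural_similarity`) rather than a few lines of arithmetic.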