Abstract: Analyzing and predicting brain aging is essential for the early prognosis and accurate diagnosis of cognitive diseases. Neuroimaging techniques such as Magnetic Resonance Imaging (MRI) provide a noninvasive means of observing the aging process within the brain. With the collection of longitudinal image data, data-intensive Artificial Intelligence (AI) algorithms have been used to examine brain aging. However, existing state-of-the-art algorithms tend to be restricted to group-level predictions and often produce anatomically unrealistic predictions. This paper proposes a methodology for generating longitudinal MRI scans that capture subject-specific neurodegeneration and remain anatomically plausible throughout aging. The proposed methodology is developed within the framework of diffeomorphic registration and relies on three key novel technological advances to generate subject-level, anatomically plausible predictions: i) a computationally efficient and individualized generative framework based on registration; ii) an aging generative module based on linear biological aging progression; iii) a quality-control module that adapts registration to the generation task. Our methodology was evaluated on 2662 T1-weighted (T1-w) MRI scans from 796 participants across three different cohorts. First, we applied six commonly used criteria to demonstrate the ability of the proposed methodology to simulate aging; second, we evaluated the quality of the synthetic images using quantitative measurements and a qualitative assessment by a neuroradiologist. Overall, the experimental results show that the proposed method can produce anatomically plausible predictions that can be used to enhance longitudinal datasets and, in turn, enable data-hungry AI-driven healthcare tools.
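The abstract does not detail the aging generative module, but in stationary-velocity-field (SVF) formulations of diffeomorphic registration a linear progression model is commonly realized by scaling a subject-specific velocity field by the elapsed time and exponentiating it into a deformation via scaling and squaring. The Python sketch below illustrates this general idea in 2-D; the function names (`exp_svf`, `age_image`) and all details are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming an SVF-based linear aging model (not the paper's code).
import numpy as np
from scipy.ndimage import map_coordinates

def exp_svf(v, n_steps=6):
    """Exponentiate a 2-D stationary velocity field v (shape (2, H, W), voxel
    units) into a displacement field by scaling and squaring."""
    disp = v / (2 ** n_steps)  # initial small step: phi ~ identity + v / 2^N
    grid = np.stack(np.meshgrid(np.arange(v.shape[1]),
                                np.arange(v.shape[2]), indexing="ij")).astype(float)
    for _ in range(n_steps):   # repeated self-composition: phi_{2t} = phi_t o phi_t
        warped = np.stack([map_coordinates(disp[d], grid + disp, order=1)
                           for d in range(2)])
        disp = disp + warped
    return disp

def age_image(img, v, dt):
    """Warp `img` by dt years, assuming linear progression dt * v."""
    disp = exp_svf(dt * v)
    grid = np.stack(np.meshgrid(np.arange(img.shape[0]),
                                np.arange(img.shape[1]), indexing="ij")).astype(float)
    return map_coordinates(img, grid + disp, order=1)

# Usage with stand-in data: simulate a 5-year follow-up from a baseline slice.
baseline = np.random.rand(64, 64).astype(np.float32)
v = 0.01 * np.random.randn(2, 64, 64).astype(np.float32)  # hypothetical fitted SVF
followup_5y = age_image(baseline, v, dt=5.0)
```

Because the deformation is the exponential of `dt * v`, the model extrapolates smoothly and invertibly in time, which is one way the diffeomorphic framework can keep aged scans anatomically plausible.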
Abstract: Deep learning (DL) methods have in recent years yielded impressive results in medical imaging, with the potential to function as a clinical aid to radiologists. However, DL models in medical imaging are often trained on public research cohorts whose images were acquired with a single scanner or with strict protocol harmonization, which is not representative of a clinical setting. The aim of this study was to investigate how well a DL model performs on unseen clinical data sets, collected with different scanners, protocols, and disease populations, and whether more heterogeneous training data improves generalization. In total, 3117 brain MRI scans from multiple dementia research cohorts and memory clinics, all visually rated by a neuroradiologist according to Scheltens' scale of medial temporal atrophy (MTA), were included in this study. By training multiple versions of a convolutional neural network on different subsets of these data to predict MTA ratings, we assessed the impact that including images from a wider distribution during training had on performance in external memory clinic data. Our results showed that our model generalized well to data sets acquired with protocols similar to the training data, but substantially worse in clinical cohorts with visibly different tissue contrasts in the images. This implies that future DL studies investigating performance on out-of-distribution (OOD) MRI data need to assess multiple external cohorts to obtain reliable results. Further, including data from a wider range of scanners and protocols improved performance on OOD data, which suggests that more heterogeneous training data makes the model generalize better. To conclude, this is the most comprehensive study to date investigating domain shift in deep learning on MRI data, and we advocate rigorous evaluation of DL models on clinical data before they are certified for deployment.
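The evaluation logic described above (train on different subsets of cohorts, score separately on each held-out external cohort) can be sketched as follows. The helpers `train_fn` and `score_fn` and the `cohorts` dictionary are hypothetical placeholders, not the study's actual code.

```python
# Minimal sketch of a per-cohort OOD evaluation, assuming user-supplied
# training and scoring callables (hypothetical, not the study's pipeline).
from itertools import combinations

def evaluate_heterogeneity(cohorts, train_fn, score_fn):
    """cohorts: dict mapping cohort name -> (images, mta_ratings)."""
    results = {}
    names = sorted(cohorts)
    for k in range(1, len(names)):                 # training sets of growing diversity
        for train_names in combinations(names, k):
            model = train_fn([cohorts[n] for n in train_names])
            for test_name in set(names) - set(train_names):  # unseen (OOD) cohorts
                x, y = cohorts[test_name]
                results[(train_names, test_name)] = score_fn(model, x, y)
    return results
```

Comparing scores for the same test cohort across training sets of increasing diversity is what allows the effect of heterogeneous training data on OOD performance to be isolated.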
Abstract: Quantifying the degree of brain atrophy is done clinically by neuroradiologists following established visual rating scales. For these assessments to be reliable, the rater requires substantial training and experience, and even then the rating agreement between two radiologists is not perfect. We have developed a model, called AVRA (Automatic Visual Ratings of Atrophy), based on machine learning methods and trained on 2350 visual ratings made by an experienced neuroradiologist. It provides fast and automatic ratings for Scheltens' scale of medial temporal atrophy (MTA), the frontal subscale of Pasquier's Global Cortical Atrophy (GCA-F) scale, and Koedam's scale of Posterior Atrophy (PA). We demonstrate substantial inter-rater agreement between AVRA's ratings and those of a neuroradiologist, with Cohen's weighted kappa values of $\kappa_w$ = 0.74/0.72 (MTA left/right), $\kappa_w$ = 0.62 (GCA-F), and $\kappa_w$ = 0.74 (PA), and an inherent intra-rater agreement of $\kappa_w$ = 1. We conclude that automatic visual ratings of atrophy could have great clinical and scientific value, and we aim to release AVRA as a freely available toolbox.
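Cohen's weighted kappa, the agreement statistic reported above, penalizes disagreements by their distance on the ordinal scale, which suits ratings such as MTA (0-4). The sketch below computes it with scikit-learn; the choice of linear weights and the example ratings are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: Cohen's weighted kappa for ordinal ratings (made-up numbers).
from sklearn.metrics import cohen_kappa_score

radiologist = [0, 1, 2, 3, 4, 2, 1, 0, 3, 2]  # neuroradiologist's MTA ratings
model       = [0, 1, 2, 4, 4, 2, 1, 1, 3, 2]  # automatic (AVRA-style) ratings

# weights="linear" weights a 3-vs-4 disagreement less than a 0-vs-4 one.
kappa_w = cohen_kappa_score(radiologist, model, weights="linear")
print(f"kappa_w = {kappa_w:.2f}")
```

Rating a scan twice with a deterministic model always yields identical outputs, which is why the intra-rater agreement of AVRA is $\kappa_w$ = 1 by construction.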