*: shared first/last authors
Abstract: Unique identification of multiple sclerosis (MS) white matter lesions (WMLs) is important to help characterize MS progression. WMLs are routinely identified from magnetic resonance images (MRIs), but the resultant total lesion load does not correlate well with the Expanded Disability Status Scale (EDSS), whereas mean unique lesion volume has been shown to correlate with EDSS. Our approach builds on prior work by incorporating Hessian matrix computation from lesion probability maps before using the random walker algorithm to estimate the volume of each unique lesion. Experiments on synthetic images demonstrate our ability to accurately count the number of lesions present. The takeaways are: 1) our method correctly identifies all lesions, including many that are missed by previous methods; 2) it better separates confluent lesions; and 3) it accurately captures the total volume of WMLs in a given probability map. This work will allow new, more meaningful statistics to be computed from WMLs in brain MRIs.
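As a rough illustration of the per-lesion volume estimation described above, the sketch below splits a lesion probability map into unique lesions with scikit-image's random walker. The seeding via local probability maxima is a simplified stand-in for the paper's Hessian-based computation, and all parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import random_walker

def unique_lesion_volumes(prob_map, voxel_vol_mm3=1.0, thresh=0.5):
    """Split a WML probability map into unique lesions and return their volumes."""
    mask = prob_map > thresh
    # Seeds: local maxima of the probability map, possibly several per
    # connected component, so confluent lesions can be split apart.
    comps = ndimage.label(mask)[0]
    peaks = peak_local_max(prob_map, min_distance=3, labels=comps)
    seeds = np.zeros(prob_map.shape, dtype=int)
    for i, c in enumerate(peaks, start=1):
        seeds[tuple(c)] = i
    seeds[~mask] = -1  # exclude background from the walk
    labels = random_walker(prob_map, seeds, beta=130)
    labels[~mask] = 0
    counts = ndimage.sum_labels(np.ones(labels.shape), labels,
                                index=np.arange(1, labels.max() + 1))
    return counts * voxel_vol_mm3  # one volume per unique lesion
```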
Abstract: Automatic magnetic resonance (MR) image processing pipelines are widely used to study people with multiple sclerosis (PwMS), encompassing tasks such as lesion segmentation and brain parcellation. However, the presence of lesions often complicates these analyses, particularly in brain parcellation. Lesion filling is commonly used to mitigate this issue, but existing lesion filling algorithms often fall short in accurately reconstructing realistic lesion-free images, which are vital for consistent downstream analysis. Additionally, the performance of lesion segmentation algorithms is often limited by insufficient training data with lesion delineations as labels. In this paper, we propose a novel approach leveraging Denoising Diffusion Implicit Models (DDIMs) for both MS lesion filling and synthesis based on image inpainting. Our modified DDIM architecture, once trained, enables both MS lesion filling and synthesis. Specifically, it can generate lesion-free T1-weighted or FLAIR images from those containing lesions, or it can add lesions to T1-weighted or FLAIR images of healthy subjects. The former is essential for downstream analyses that require lesion-free images, while the latter is valuable for augmenting training datasets for lesion segmentation tasks. We validate our approach through initial experiments in this paper and demonstrate promising results in both lesion filling and synthesis, paving the way for future work.
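A minimal sketch of the inpainting idea, assuming a RePaint-style masked DDIM sampler rather than the authors' exact modified architecture; `eps_model`, the noise schedule, and the step list are placeholders.

```python
import torch

@torch.no_grad()
def ddim_inpaint(eps_model, x_known, mask, alphas_cumprod, steps):
    """mask == 1 marks the region to synthesize (a lesion to fill or to add)."""
    x = torch.randn_like(x_known)
    for t, t_prev in zip(steps[:-1], steps[1:]):  # descending timesteps
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
        eps = eps_model(x, t)                                # predict noise
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # estimate clean image
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps   # deterministic DDIM step
        # Re-impose the known (unmasked) content at the matching noise level.
        x_keep = a_prev.sqrt() * x_known + (1 - a_prev).sqrt() * torch.randn_like(x)
        x = mask * x + (1 - mask) * x_keep
    return x
```

Filling and synthesis then differ only in the mask and input: mask a lesion in a PwMS image to fill it, or mask a healthy region to add a lesion there.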
Abstract: Affine registration plays a crucial role in PET/CT imaging, where aligning PET with CT images is challenging due to their respective functional and anatomical representations. Despite the significant promise shown by recent deep learning (DL)-based methods in various medical imaging applications, their application to multi-modal PET/CT affine registration remains relatively unexplored. This study investigates a DL-based approach to PET/CT affine registration. We introduce a novel method using Parzen windowing to approximate the correlation ratio, which acts as the image similarity measure for training deep neural networks (DNNs) in multi-modal registration. Additionally, we propose a multi-scale, instance-specific optimization scheme that iteratively refines the DNN-generated affine parameters across multiple image resolutions. Our method was evaluated against the widely used mutual information metric and a popular optimization-based technique from the ANTs package, using a large public FDG-PET/CT dataset with synthetic affine transformations. Our approach achieved a mean Dice Similarity Coefficient (DSC) of 0.870, outperforming the compared methods and demonstrating its effectiveness in multi-modal PET/CT image registration.
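The correlation ratio can be made differentiable for DNN training via Parzen windowing roughly as follows. This sketch is written from the textbook definition of the correlation ratio; the bin count and kernel width are assumptions, not the paper's settings.

```python
import torch

def correlation_ratio(x, y, num_bins=32, sigma=None):
    """Soft-bin x (e.g., CT) and measure how well it predicts y (e.g., PET)."""
    x, y = x.flatten(), y.flatten()
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    centers = torch.linspace(0, 1, num_bins, device=x.device)
    sigma = sigma or 1.0 / num_bins
    # Gaussian Parzen weights of every voxel to every bin: (bins, N)
    w = torch.exp(-0.5 * ((x[None, :] - centers[:, None]) / sigma) ** 2)
    w = w / (w.sum(dim=0, keepdim=True) + 1e-8)
    n_b = w.sum(dim=1)                              # soft bin counts
    mu_b = (w @ y) / (n_b + 1e-8)                   # E[y | bin]
    var_b = (w @ y ** 2) / (n_b + 1e-8) - mu_b ** 2  # Var[y | bin]
    within = (n_b * var_b).sum() / n_b.sum()
    return 1.0 - within / (y.var() + 1e-8)          # eta^2 in [0, 1]
```

A registration network would then minimize `1 - correlation_ratio(ct, warped_pet)` as the similarity term.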
Abstract: Magnetic resonance (MR) imaging is commonly used in the clinical setting to non-invasively monitor the body. There exists large variability in MR imaging due to differences in scanner hardware, software, and protocol design. Ideally, a processing algorithm should be robust to this variability, but that is not always the case in practice. This introduces a need for image harmonization to overcome issues of domain shift when performing downstream analysis such as segmentation. Most image harmonization models focus on acquisition parameters such as inversion time or repetition time, but they ignore an important aspect of MR imaging -- resolution. In this paper, we evaluate the impact of image resolution on harmonization using a pretrained harmonization algorithm. We simulate 2D acquisitions of various slice thicknesses and gaps from 3D-acquired, 1 mm³ isotropic MR images and demonstrate how the performance of a state-of-the-art image harmonization algorithm varies as resolution changes. We discuss the ideal scenarios for image resolution, including acquisition orientation, when 3D imaging is not available, which is common for many clinical scanners. Our results show that harmonization of low-resolution images does not account for variations in acquisition resolution and orientation. Super-resolution can alleviate resolution variation, but it is not always applied in practice. Our methodology can be generalized to evaluate the impact of image acquisition resolution for multiple tasks. Determining the limits of a pretrained algorithm is important when considering preprocessing steps and trust in the results.
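A sketch of the kind of 2D-acquisition simulation described above, using a Gaussian through-plane blur as a stand-in for the slice profile (the actual slice-selection profile used in the paper may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def simulate_2d_acquisition(vol_iso, thickness_mm, gap_mm, axis=2):
    """Simulate a thick-slice 2D scan from a 1 mm isotropic volume."""
    fwhm = thickness_mm  # assume slice-profile FWHM equals slice thickness
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    blurred = gaussian_filter1d(vol_iso, sigma, axis=axis)
    # Keep one slice every (thickness + gap) mm along the chosen axis.
    step = int(round(thickness_mm + gap_mm))
    idx = np.arange(0, vol_iso.shape[axis], step)
    return np.take(blurred, idx, axis=axis)
```

Varying `axis` simulates different acquisition orientations (axial, coronal, sagittal) from the same isotropic volume.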
Abstract: Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. In this work, we present Vector Field Attention (VFA), a novel framework that enhances the efficiency of existing network designs by enabling direct retrieval of location correspondences. VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner. We evaluated VFA for intra- and inter-modality registration and for unsupervised and semi-supervised registration using public datasets, and we also evaluated it on the Learn2Reg challenge. Experimental results demonstrate the superior performance of VFA compared to existing methods. The source code of VFA is publicly available at https://github.com/yihao6/vfa/.
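The parameter-free attention idea can be sketched as follows: fixed-image features act as queries, moving-image features in a local window act as keys, and the candidate displacement vectors themselves act as values. This is a simplified 2D reading of the design, not the released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def vector_field_attention(feat_fixed, feat_moving, radius=3):
    """feat_*: (B, C, H, W). Returns a (B, 2, H, W) displacement field."""
    B, C, H, W = feat_fixed.shape
    k = 2 * radius + 1
    # Candidate displacement vectors within the search window: (K, 2)
    offsets = torch.stack(torch.meshgrid(
        torch.arange(-radius, radius + 1),
        torch.arange(-radius, radius + 1), indexing='ij'), -1).float()
    offsets = offsets.view(-1, 2)
    # Gather each local window of moving features: (B, C, K, H*W)
    win = F.unfold(feat_moving, k, padding=radius).view(B, C, k * k, H * W)
    q = feat_fixed.view(B, C, 1, H * W)
    # Similarity-based attention over the K candidates; no learnable weights.
    attn = F.softmax((q * win).sum(1) / C ** 0.5, dim=1)      # (B, K, H*W)
    flow = torch.einsum('bkn,kd->bdn', attn, offsets.to(attn.device))
    return flow.view(B, 2, H, W)
```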
Abstract: Understanding the uncertainty inherent in deep learning-based image registration models has been an ongoing area of research. Existing methods have been developed to quantify both transformation and appearance uncertainties related to the registration process, elucidating areas where the model may exhibit ambiguity regarding the generated deformation. However, our study reveals that neither type of uncertainty effectively estimates the potential errors when the registration model is used for label propagation. Here, we propose a novel framework to concurrently estimate both the epistemic and aleatoric segmentation uncertainties for image registration. To this end, we implement a compact deep neural network (DNN) designed to transform the appearance discrepancy in the warping into aleatoric segmentation uncertainty by minimizing a negative log-likelihood loss function. Furthermore, we present the epistemic segmentation uncertainty within the label propagation process as the entropy of the propagated labels. By introducing segmentation uncertainty alongside existing methods for estimating registration uncertainty, we offer vital insights into the potential uncertainties at different stages of image registration. We validated our proposed framework using publicly available datasets, and the results show that the segmentation uncertainties estimated with the proposed method correlate well with errors in label propagation, all while achieving superior registration performance.
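A minimal sketch of the two estimates as described: a Gaussian negative log-likelihood for the aleatoric part and the entropy of the propagated soft labels for the epistemic part. The exact network head and loss weighting in the paper may differ.

```python
import torch

def aleatoric_nll(warped, fixed, log_var):
    """Gaussian NLL; log_var is the DNN-predicted per-voxel log variance,
    turning appearance discrepancy into aleatoric uncertainty."""
    sq_err = (warped - fixed) ** 2
    return 0.5 * (torch.exp(-log_var) * sq_err + log_var).mean()

def label_entropy(probs, eps=1e-8):
    """probs: (B, num_classes, ...) soft labels after warping. High entropy
    flags voxels where label propagation is ambiguous (epistemic proxy)."""
    return -(probs * (probs + eps).log()).sum(dim=1)
```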
Abstract: Deep learning (DL) has led to significant improvements in medical image synthesis, enabling advanced image-to-image translation to generate synthetic images. However, DL methods face challenges such as domain shift and high demands for training data, limiting their generalizability and applicability. Historically, image synthesis was also carried out using deformable image registration (DIR), a method that warps moving images of a desired modality to match the anatomy of a fixed image. However, concerns about its speed and accuracy led to its decline in popularity. With recent advances in DL-based DIR, we now revisit and reinvigorate this line of research. In this paper, we propose a fast and accurate synthesis method based on DIR. We use the task of synthesizing a rare magnetic resonance (MR) sequence, white matter nulled (WMn) T1-weighted (T1-w) images, to demonstrate the potential of our approach. During training, our method learns a DIR model based on the widely available MPRAGE sequence, which is a cerebrospinal fluid nulled (CSFn) T1-w inversion recovery gradient echo pulse sequence. During testing, the trained DIR model is first applied to estimate the deformation between moving and fixed CSFn images. Subsequently, this estimated deformation is applied to align the paired WMn counterpart of the moving CSFn image, yielding a synthetic WMn image for the fixed CSFn image. Our experiments demonstrate promising results for unsupervised image synthesis using DIR. These findings highlight the potential of our technique in contexts where supervised synthesis methods are constrained by limited training data.
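The test-time procedure maps to a few lines of pseudocode; `dir_model` and `warp` below are placeholders for a trained registration network and a spatial transformer (e.g., grid_sample-based warping).

```python
import torch

@torch.no_grad()
def synthesize_wmn(dir_model, warp, csfn_fixed, csfn_moving, wmn_moving):
    # 1. Estimate the deformation between the two CSFn (MPRAGE) images.
    flow = dir_model(csfn_moving, csfn_fixed)
    # 2. Apply that same deformation to the paired WMn image of the mover,
    #    yielding a synthetic WMn image aligned to the fixed CSFn anatomy.
    return warp(wmn_moving, flow)
```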
Abstract: Magnetic resonance imaging with tagging (tMRI) has long been utilized for quantifying tissue motion and strain during deformation. However, a phenomenon known as tag fading, a gradual decrease in tag visibility over time, often complicates post-processing. The first contribution of this study is to model tag fading by considering the interplay between $T_1$ relaxation and the repeated application of radio frequency (RF) pulses during serial imaging sequences, a factor that has been overlooked in prior research on tMRI post-processing. Further, we have observed an emerging trend of utilizing raw tagged MRI within deep learning-based (DL) registration frameworks for motion estimation. In this work, we evaluate and analyze the impact of commonly used image similarity objectives when training DL registration models on raw tMRI. This is then compared with the harmonic phase (HARP)-based approach, a traditional method that is claimed to be robust to tag fading. Our findings, derived from both simulated images and an actual phantom scan, reveal the limitations of various similarity losses on raw tMRI and emphasize caution in registration tasks where image intensity changes over time.
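A toy version of the tag-fading mechanism described above: longitudinal magnetization recovering with $T_1$ while being repeatedly tipped by imaging RF pulses of flip angle alpha each TR. All parameter values are illustrative only.

```python
import numpy as np

def tag_contrast(n_frames, TR=0.03, T1=1.0, alpha_deg=10.0, M0=1.0, Mz0=-1.0):
    """Track Mz of a tagged (inverted) line; its contrast to background fades."""
    alpha = np.deg2rad(alpha_deg)
    Mz, contrast = Mz0, []
    for _ in range(n_frames):
        Mz = Mz * np.cos(alpha)                 # partial saturation by the RF pulse
        Mz = M0 + (Mz - M0) * np.exp(-TR / T1)  # T1 recovery over one TR
        contrast.append(M0 - Mz)                # tag-to-background contrast
    return np.array(contrast)
```

The returned curve decays toward zero, which is the tag fading that the similarity losses studied here must contend with.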
Abstract: Anisotropic low-resolution (LR) magnetic resonance (MR) images are fast to obtain but hinder automated processing. We propose to use denoising diffusion probabilistic models (DDPMs) to super-resolve these 2D-acquired LR MR slices. This paper introduces AniRes2D, a novel approach combining DDPMs with residual prediction for 2D super-resolution (SR). Results demonstrate that AniRes2D outperforms several other DDPM-based models in quantitative metrics, visual quality, and out-of-domain evaluation. We use a trained AniRes2D model to super-resolve 3D volumes slice by slice, achieving comparable quantitative results and reduced skull aliasing relative to a recent state-of-the-art self-supervised 3D super-resolution method. Furthermore, we explored the use of noise conditioning augmentation (NCA) as an alternative augmentation technique for DDPM-based SR models but found that it reduced performance. Our findings contribute valuable insights to the application of DDPMs for SR of anisotropic MR images.
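The residual-prediction idea can be sketched as a training step in which the diffusion target is the difference between the high-resolution slice and an interpolated low-resolution slice. The conditioning and upsampling choices here are assumptions, not the exact AniRes2D recipe.

```python
import torch
import torch.nn.functional as F

def residual_ddpm_loss(eps_model, hr, lr, alphas_cumprod):
    up = F.interpolate(lr, size=hr.shape[-2:], mode='bicubic',
                       align_corners=False)
    residual = hr - up  # the diffusion target: what interpolation misses
    t = torch.randint(0, len(alphas_cumprod), (hr.shape[0],), device=hr.device)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(residual)
    x_t = a.sqrt() * residual + (1 - a).sqrt() * noise
    # The denoiser sees the noisy residual concatenated with the LR condition.
    eps = eps_model(torch.cat([x_t, up], dim=1), t)
    return F.mse_loss(eps, noise)
```

At sampling time, the denoised residual is added back to the interpolated slice to form the SR output.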
Abstract: Automatic multiple sclerosis (MS) lesion segmentation using multi-contrast magnetic resonance (MR) images provides improved efficiency and reproducibility compared to manual delineation. Current state-of-the-art automatic MS lesion segmentation methods utilize modified U-Net-like architectures. However, in the literature, dedicated architecture modifications have always been required to maximize their performance. In addition, the best-performing methods have not proven to be generalizable to diverse test datasets with contrast variations and image artifacts. In this work, we developed an accurate and generalizable MS lesion segmentation model using the well-known U-Net architecture without further modification. A novel test-time self-ensembled lesion fusion strategy is proposed that not only achieved the best performance on the ISBI 2015 MS segmentation challenge data but also demonstrated robustness across various self-ensemble parameter choices. Moreover, equipped with instance normalization rather than the batch normalization widely used in the literature, the model trained on the ISBI challenge data generalized well to clinical test datasets from different scanners.
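A minimal sketch of test-time self-ensembling by flip augmentation and probability averaging; the paper's lesion-level fusion rule is likely more elaborate than this simple mean.

```python
import torch

@torch.no_grad()
def self_ensemble(model, x, flip_dims=((), (2,), (3,), (2, 3))):
    """Average predictions over flipped copies of the input x: (B, C, H, W)."""
    probs = []
    for dims in flip_dims:
        xi = torch.flip(x, dims) if dims else x
        p = torch.sigmoid(model(xi))
        probs.append(torch.flip(p, dims) if dims else p)  # undo the flip
    return torch.stack(probs).mean(0)  # fused lesion probability map
```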