Abstract: In certain brain volumetric studies, synthetic T1-weighted magnetization-prepared rapid gradient-echo (MP-RAGE) contrast derived from quantitative T1 MRI (T1-qMRI) is highly valuable because its clear white/gray matter boundaries aid brain segmentation. However, generating synthetic MP-RAGE (syn-MP-RAGE) typically requires pairs of high-quality, artifact-free, multi-modality inputs, which can be difficult to obtain in retrospective studies, where missing or corrupted data are common. To overcome this limitation, we explore the feasibility of a deep learning-based approach that synthesizes syn-MP-RAGE contrast directly from a single-channel turbo spin-echo (TSE) input, a sequence known for its resistance to metal artifacts. We evaluated this deep learning-based synthetic MP-RAGE (DL-Syn-MPR) on 31 non-artifact and 11 metal-artifact subjects. The segmentation results, measured by the Dice Similarity Coefficient (DSC), consistently showed high agreement with reference segmentations (DSC above 0.83) despite the lower input requirements, and no significant difference in segmentation performance was observed between the artifact and non-artifact groups.
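The Dice Similarity Coefficient used for the segmentation evaluation above is a standard overlap metric, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch in plain Python (the function name and the flat 0/1-mask representation are illustrative choices, not taken from the paper):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary segmentation masks.

    DSC = 2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    Masks are flat sequences of 0/1 voxel labels of equal length.
    """
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * intersection / total
```

Note that for binary masks the DSC is identical to the F1 score, so library implementations of F1 can serve as a cross-check.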
Abstract: Image restoration is a classic ill-posed problem that spans a variety of tasks. In the medical imaging field, a degraded image hampers diagnosis and any subsequent image processing. Both traditional iterative methods and recent deep networks have attracted much attention and achieved significant improvements in reconstructing satisfactory images. This study combines their advantages in one unified mathematical model and proposes a general image restoration strategy for such problems. The strategy consists of two modules. First, a novel generative adversarial network (GAN) trained with the WGAN-GP objective recovers image structures and subtle details. Then, a deep iteration module promotes image quality by combining pre-trained deep networks with compressed sensing algorithms via ADMM optimization. The (D)eep (I)teration module suppresses image artifacts and further recovers subtle image details, (A)ssisted by a (M)ulti-level (O)bey-pixel feature extraction (N)etworks (D)iscriminator that recovers general structures; the proposed strategy is therefore named DIAMOND.
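The ADMM optimization mentioned above alternates between simple subproblems coupled by a dual variable. As a hedged illustration (a generic toy, not the paper's DIAMOND model), the scaffold below solves the sparse denoising problem min_x ½‖x − y‖² + λ‖x‖₁ by splitting x = z, where the z-update is the soft-thresholding operator familiar from compressed sensing:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_l1_denoise(y, lam, rho=1.0, iters=100):
    """ADMM for min_x 0.5*||x - y||^2 + lam*||x||_1 via the split x = z."""
    n = len(y)
    x = [0.0] * n
    z = [0.0] * n
    u = [0.0] * n  # scaled dual variable
    for _ in range(iters):
        # x-update: quadratic subproblem with closed-form solution
        x = [(y[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: elementwise soft-thresholding (prox of the l1 term)
        z = [soft_threshold(x[i] + u[i], lam / rho) for i in range(n)]
        # dual update: accumulate the constraint residual x - z
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

For this separable toy problem the iterates converge to elementwise soft-thresholding of y by λ; the same alternating structure carries over when the quadratic term involves a nontrivial forward operator and the prior step is replaced by a pre-trained network.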
Abstract: Tomographic image reconstruction with deep learning is an emerging field of applied artificial intelligence, but a recent study revealed that deep reconstruction networks, such as the well-known AUTOMAP, are unstable for computed tomography (CT) and magnetic resonance imaging (MRI). Specifically, three kinds of instabilities were identified: (1) strong output artefacts from tiny input perturbations, (2) poor detection of small features, and (3) decreased performance with increased input data. These instabilities are believed to stem from a lack of kernel awareness and are nontrivial to overcome, whereas compressed sensing (CS) reconstruction was reported to be stable thanks to its kernel awareness. Since deep reconstruction may become the main driving force toward better image quality, stabilizing deep reconstruction networks is an urgent challenge. Here we propose an Analytic, Compressive, Iterative Deep (ACID) network to address this challenge fundamentally. Rather than relying on deep learning or compressed sensing alone, ACID consists of four modules: deep reconstruction, CS, analytic mapping, and iterative refinement. In our experiments, ACID eliminated all three kinds of instabilities and significantly improved image quality relative to the methods in the aforementioned PNAS study. ACID is only one example of integrating diverse algorithmic ingredients, but it clearly demonstrates that data-driven reconstruction can be stabilized to outperform reconstruction using CS alone. The power of ACID comes from a unique combination of a deep reconstruction network trained on big data, CS via advanced optimization, and iterative refinement that stabilizes the whole workflow. We anticipate that this integrative, closed-loop, data-driven approach will add great value to clinical and other applications.
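The closed-loop idea behind ACID, alternating a learned reconstruction step with an analytic data-consistency step, can be sketched schematically. The toy below is an assumption-laden stand-in (identity forward operator, a moving-average filter in place of the trained deep network), not the ACID algorithm itself; it only shows the shape of such an iterative refinement loop:

```python
def data_consistency(x, y, step=0.5):
    """Gradient step pulling the estimate toward the measurements y.

    Toy identity forward operator; in CT/MRI this would involve the
    (pseudo-)inverse of the Radon or Fourier sampling operator.
    """
    return [xi - step * (xi - yi) for xi, yi in zip(x, y)]

def toy_denoiser(x):
    """Stand-in for a trained deep reconstruction network: 3-point moving average."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def closed_loop_recon(y, iters=20):
    """Alternate a learned-prior step with an analytic consistency step."""
    x = list(y)
    for _ in range(iters):
        x = toy_denoiser(x)           # deep-prior / regularization step
        x = data_consistency(x, y)    # analytic data-fidelity step
    return x
```

The repeated data-consistency step is what keeps the learned component anchored to the measurements, which is the intuition behind stabilizing a purely data-driven network with analytic and CS modules.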