Abstract: We propose a multi-scale deep energy model that is strongly convex in a local neighbourhood of the data manifold and use it to represent the data's probability density, with application to inverse problems. In particular, we represent the negative log-prior as a multi-scale energy model parameterized by a Convolutional Neural Network (CNN). We restrict the gradient of the CNN to be locally monotone, which constrains the model to be a Locally Convex Multi-Scale Energy (LC-MuSE). We use the learned energy model in image-based inverse problems, where the formulation offers several desirable properties: (i) uniqueness of the solution, (ii) guaranteed convergence to a minimum of the inverse problem, and (iii) robustness to input perturbations. In the context of parallel Magnetic Resonance (MR) image reconstruction, we show that the proposed method performs better than state-of-the-art convex regularizers, while its performance is comparable to that of plug-and-play regularizers and end-to-end trained methods.
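A minimal sketch of the formulation described above, using illustrative notation that is not taken from the paper (A for the forward operator, b for the measurements, E_θ for the CNN-parameterized energy, m > 0 for the monotonicity bound): the reconstruction is the maximum a posteriori estimate
\[
\hat{x} = \arg\min_{x} \; \tfrac{1}{2}\|Ax - b\|_2^2 + E_{\theta}(x),
\qquad
\langle \nabla E_{\theta}(x) - \nabla E_{\theta}(y),\, x - y \rangle \geq m \|x - y\|_2^2,
\]
with the second condition imposed only for x, y in a neighbourhood of the data manifold. Local monotonicity of the gradient is equivalent to local strong convexity of E_θ, which is what underlies the uniqueness, convergence, and robustness claims.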
Abstract: Three-dimensional (3D) multi-slab acquisition is a technique frequently employed in high-resolution diffusion-weighted MRI to achieve the best signal-to-noise ratio (SNR) efficiency. However, the technique is limited by slab boundary artifacts, which cause intensity fluctuations and aliasing between slabs and reduce the accuracy of anatomical imaging. Addressing this issue is crucial for advancing diffusion MRI quality and making high-resolution imaging more feasible for clinical and research applications. In this work, we propose a regularized slab profile encoding (PEN) method within a Plug-and-Play ADMM framework, incorporating multi-scale energy (MuSE) regularization to effectively improve the slab-combined reconstruction. Experimental results demonstrate that the proposed method significantly improves image quality compared to non-regularized and TV-regularized PEN approaches. The regularized PEN framework provides a more robust and efficient solution for high-resolution 3D diffusion MRI, potentially enabling clearer, more reliable anatomical imaging across a range of applications.
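A generic plug-and-play ADMM loop of the kind referenced above, sketched in Python. The operator and function names (`A`, `At`, `denoiser`) and the gradient-descent inner solve are illustrative assumptions, not the authors' PEN implementation:

```python
import numpy as np

def pnp_admm(b, A, At, denoiser, rho=1.0, n_iters=50, n_inner=5, step=0.1):
    """Sketch of plug-and-play ADMM with a learned (e.g. MuSE-based) denoiser.

    A, At    : forward operator and its adjoint (e.g. a slab-encoding model)
    denoiser : learned prior step applied in the z-update
    rho      : ADMM penalty parameter
    """
    x = At(b)                 # adjoint (zero-filled) initialization
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iters):
        # x-update: data-consistency least-squares problem, solved here by a
        # few gradient steps for brevity (conjugate gradients is typical)
        for _ in range(n_inner):
            grad = At(A(x) - b) + rho * (x - (z - u))
            x = x - step * grad
        # z-update: plug-and-play proximal step using the learned prior
        z = denoiser(x + u)
        # dual update
        u = u + x - z
    return x
```

The same loop structure applies regardless of the forward model, which is what makes the plug-and-play formulation convenient for slab-combined reconstruction.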
Abstract: The acquisition of 3D multi-contrast MRI data with good isotropic spatial resolution is challenged by lengthy scan times. In this work, we introduce a CNN-based multi-scale energy model to learn the joint probability distribution of the multi-contrast images. The joint recovery of the contrasts from undersampled data is posed as a maximum a posteriori (MAP) estimation problem, where the learned energy serves as the prior. We use a majorize-minimize algorithm to solve the resulting optimization problem. The proposed model leverages the redundancies across the different contrasts to improve image fidelity. The proposed scheme preserves fine details and contrast, offering sharper reconstructions than methods that recover each contrast independently. While we focus on 3D MPNRAGE acquisitions in this work, the proposed approach generalizes to arbitrary multi-contrast settings.
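One standard majorize-minimize construction consistent with the description above, written with illustrative symbols (x stacks all contrasts, A is the joint forward model, b the undersampled data, E_θ the learned energy, assumed to have an L-Lipschitz gradient); the exact surrogate used in the paper may differ:
\[
x^{n+1} = \arg\min_{x} \; \tfrac{1}{2}\|Ax - b\|_2^2
 + \langle \nabla E_{\theta}(x^{n}),\, x - x^{n} \rangle + \tfrac{L}{2}\|x - x^{n}\|_2^2 .
\]
The quadratic surrogate majorizes the MAP objective at $x^{n}$, so each iteration reduces to a quadratic problem that can be solved with conjugate gradients while the overall cost decreases monotonically.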
Abstract: End-to-End (E2E) unrolled optimization frameworks show promise for Magnetic Resonance (MR) image recovery, but suffer from high memory usage during training. In addition, these deterministic approaches do not offer opportunities for sampling from the posterior distribution. In this paper, we introduce a memory-efficient approach for E2E learning of the posterior distribution. We represent this distribution as the combination of a data-consistency-induced likelihood term and an energy model for the prior, parameterized by a Convolutional Neural Network (CNN). The CNN weights are learned from training data in an E2E fashion using maximum likelihood optimization. The learned model enables the recovery of images from undersampled measurements using Maximum A Posteriori (MAP) optimization. In addition, the posterior model can be sampled to derive uncertainty maps for the reconstruction. Experiments on parallel MR image reconstruction show that our approach performs comparably to the memory-intensive E2E unrolled algorithm, performs better than its memory-efficient counterpart, and can provide uncertainty maps. Our framework paves the way towards MR image reconstruction in 3D and higher dimensions.
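A sketch of how uncertainty maps can be drawn from such a posterior using unadjusted Langevin dynamics. The function names, step sizes, and the autograd-based score are illustrative assumptions (real-valued images are assumed for simplicity), not the paper's implementation:

```python
import torch

def posterior_langevin(b, A, At, energy, noise_std=0.01,
                       step=1e-4, n_steps=500, n_samples=8):
    """Draw samples from p(x | b) ∝ exp(-||Ax - b||² / (2 σ²) - E_θ(x)).

    A, At  : forward operator and adjoint;  energy : CNN energy E_θ(x) -> scalar
    """
    samples = []
    for _ in range(n_samples):
        x = At(b).detach().clone().requires_grad_(True)
        for _ in range(n_steps):
            # negative log-posterior (up to a constant)
            nlp = ((A(x) - b) ** 2).sum() / (2 * noise_std ** 2) + energy(x).sum()
            grad, = torch.autograd.grad(nlp, x)
            with torch.no_grad():
                x = x - step * grad + (2 * step) ** 0.5 * torch.randn_like(x)
            x.requires_grad_(True)
        samples.append(x.detach())
    stack = torch.stack(samples)
    # mean serves as a point estimate, pixelwise std as an uncertainty map
    return stack.mean(0), stack.std(0)
```

Dropping the noise term and decaying the step size turns the same loop into a gradient-descent MAP solver for the identical objective.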
Abstract: The recovery of magnetic resonance (MR) images from undersampled measurements is a key problem that has seen extensive research in recent years. Unrolled approaches, which rely on end-to-end training of convolutional neural network (CNN) blocks within iterative reconstruction algorithms, offer state-of-the-art performance. These algorithms require a large amount of memory during training, making them difficult to employ in high-dimensional applications. Deep equilibrium (DEQ) models and the recent monotone operator learning (MOL) approach were introduced to eliminate the need for unrolling, thus reducing the memory demand during training. Both approaches require a Lipschitz constraint on the network to ensure that the forward and backpropagation iterations converge. Unfortunately, the constraint often results in reduced performance compared to unrolled methods. The main focus of this work is to relax the constraint on the CNN block in two ways. First, inspired by convex-non-convex regularization strategies, we impose the monotone constraint on the sum of the gradient of the data term and the CNN block, rather than constraining the CNN itself to be a monotone operator. This enables the CNN to learn possibly non-monotone score functions, which can translate to improved performance. Second, we only restrict the operator to be monotone in a local neighborhood around the image manifold. Our theoretical results show that the proposed algorithm is guaranteed to converge to the fixed point and that the solution is robust to input perturbations, provided it is initialized close to the true solution. Our empirical results show that the relaxed constraints translate to improved performance and that the approach enjoys robustness to input perturbations similar to MOL.
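Written with illustrative notation (f the data-consistency term, F_θ the CNN block, m > 0), the relaxation described above requires monotonicity only of the sum and only locally:
\[
\langle (\nabla f + F_{\theta})(x) - (\nabla f + F_{\theta})(y),\, x - y \rangle \geq m \|x - y\|_2^2
\]
for x, y in a neighborhood of the image manifold. F_θ itself may therefore be non-monotone as long as the data term compensates, and the fixed-point iteration $x^{k+1} = x^{k} - \alpha\,(\nabla f(x^{k}) + F_{\theta}(x^{k}))$ converges when initialized sufficiently close to the solution.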
Abstract: We introduce a multi-scale energy formulation for plug-and-play (PnP) image recovery. The main highlight of the proposed framework is its energy formulation, where the log-prior of the distribution is learned by a convolutional neural network (CNN) module. The energy formulation enables us to introduce optimization algorithms with guaranteed convergence, even when the CNN module is not constrained to be a contraction. Current PnP methods, which often do not have well-defined energy formulations, require a contraction constraint that restricts their performance in challenging applications. The energy and the corresponding score function are learned from reference data using denoising score matching, where the noise variance serves as a smoothness parameter that controls the shape of the learned energy function. We introduce a multi-scale optimization strategy, in which a sequence of smooth approximations of the true prior is used in the optimization process. This approach improves the convergence of the algorithm to the global minimum, which translates to improved performance. Preliminary results in the context of MRI show that the multi-scale energy PnP framework offers performance comparable to unrolled algorithms. Unlike unrolled methods, the proposed PnP approach can work with arbitrary forward models, making it an easier option for clinical deployment. In addition, the training of the proposed model is more efficient from a memory and computational perspective, making it attractive in large-scale (e.g., 4D) settings.
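A minimal sketch of the denoising score matching loss mentioned above for an energy model, with the score obtained by automatic differentiation. Function and argument names are illustrative, not taken from the paper's code:

```python
import torch

def dsm_loss(energy, x, sigma):
    """Denoising score matching for an energy model E_θ at noise level sigma.

    For Gaussian corruption x_noisy = x + sigma * eps, the regression target
    for the score of the smoothed density is (x - x_noisy) / sigma².
    """
    noise = sigma * torch.randn_like(x)
    x_noisy = (x + noise).detach().requires_grad_(True)
    e = energy(x_noisy).sum()
    # model score s_θ = -∇E_θ, kept differentiable so the loss trains θ
    score, = torch.autograd.grad(e, x_noisy, create_graph=True)
    score = -score
    target = -noise / sigma ** 2
    return ((score - target) ** 2).mean()
```

Training with a schedule of decreasing sigma values yields the sequence of smooth approximations of the prior that the multi-scale optimization strategy sweeps through at reconstruction time.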