Abstract: We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, current reconstruction algorithms for 5D MRI require long computation times, and their outcome depends heavily on how uniformly the acquired data are binned into the different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of a deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information. The deformation maps and the template are then jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects.
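The forward model underlying this approach can be illustrated with a minimal 2D NumPy sketch: each bin's k-space data is modeled as the Fourier transform of the template warped by that bin's deformation map, and the data-fidelity term compares the two. This is a toy illustration only; the names `deform` and `bin_loss` are hypothetical, the warping here is nearest-neighbor in 2D (the actual method uses a differentiable 3D interpolation and a CNN-generated deformation field), and non-Cartesian radial sampling is replaced by a plain FFT.

```python
import numpy as np

def deform(template, disp):
    """Warp a 2D template by a displacement field (nearest-neighbor sampling).
    disp has shape (2, H, W): per-pixel (dy, dx) offsets.
    Real pipelines use differentiable trilinear interpolation in 3D."""
    H, W = template.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + disp[0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + disp[1]).astype(int), 0, W - 1)
    return template[src_y, src_x]

def bin_loss(template, disp, kspace_bin):
    """Data fidelity for one cardiac/respiratory bin: Fourier samples of
    the deformed template vs. the measured k-space data for that bin."""
    pred = np.fft.fft2(deform(template, disp))
    return np.sum(np.abs(pred - kspace_bin) ** 2)
```

In the actual algorithm, the template and the network producing `disp` are optimized jointly by minimizing the sum of such losses over all bins.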
Abstract: Free-breathing cardiac MRI schemes are emerging as competitive alternatives to breath-held cine MRI protocols, extending applicability to pediatric and other patient groups that cannot hold their breath. Because the data from the slices are acquired sequentially, the cardiac/respiratory motion patterns may differ from slice to slice; current free-breathing approaches therefore recover each slice independently. In addition to being unable to exploit inter-slice redundancies, these methods require manual intervention or sophisticated post-processing to align the recovered images for quantification. To overcome these challenges, we propose an unsupervised variational deep manifold learning scheme for the joint alignment and reconstruction of multislice dynamic MRI. The proposed scheme jointly learns the parameters of the deep network as well as the latent vectors for each slice, which capture the motion-induced dynamic variations, from the k-t space data of the specific subject. The variational framework minimizes the non-uniqueness in the representation, thus offering improved alignment and reconstructions.
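The variational ingredient that "minimizes the non-uniqueness in the representation" can be sketched in NumPy: each slice gets a latent distribution (mean and log-variance), samples are drawn with the reparameterization trick, and a KL penalty to a standard normal prior regularizes the per-slice latents. This is a generic illustration of the variational machinery, not the paper's implementation; the function names are hypothetical and the decoder network is omitted.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Draw z = mu + sigma * eps (reparameterization trick), so gradients
    can flow through the sampling step to the per-slice latent parameters."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL(q(z) || N(0, I)) in closed form for a diagonal Gaussian q.
    This is the variational penalty that discourages non-unique,
    inconsistent latent representations across slices."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

During training, the sampled `z` for each slice would drive a generator producing the aligned dynamic images, and the KL term is added to the k-t space data-consistency loss.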