Abstract: In this paper, a nonlinear approach for separating different motion types in video data is proposed. This is particularly relevant in dynamic medical imaging (e.g., PET, MRI), where patient motion poses a significant challenge due to its effects on both image reconstruction and the subsequent interpretation of the images. Here, a new method is proposed in which dynamic images are represented as the forward mapping of a sequence of latent variables via a generator neural network. The latent variables are structured so that temporal variations in the data are captured by dynamic latent variables, which are independent of static latent variables characterizing the general structure of the frames. In particular, the different kinds of motion are also characterized independently of each other via latent space disentanglement, using one-dimensional prior information on all but one of the motion types. This representation makes it possible to freeze any selection of motion types and to obtain accurate, independent representations of the remaining dynamics of interest. Moreover, the proposed algorithm is training-free, i.e., all network parameters are learned directly from a single video. We illustrate the performance of this method on phantom and real-data MRI examples, where we successfully separate respiratory and cardiac motion.
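To make the latent structure concrete, the following is a minimal sketch assuming a PyTorch setup; the generator architecture, the latent dimensions, and the names (Generator, fit_single_video, z_static, z_resp, z_card) are illustrative assumptions, not the authors' implementation. It shows static latents shared across frames, a fixed one-dimensional prior for one motion type (e.g., a respiratory navigator), and free dynamic latents for the remaining motion, all fitted jointly with the generator weights to a single video.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a concatenated latent code to one image frame (illustrative MLP)."""
    def __init__(self, latent_dim, frame_shape=(64, 64)):
        super().__init__()
        self.frame_shape = frame_shape
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, frame_shape[0] * frame_shape[1]),
        )

    def forward(self, z):
        return self.net(z).view(-1, *self.frame_shape)

def fit_single_video(frames, resp_signal, n_card=2, n_static=8, steps=2000):
    """Fit generator weights and latents to a single video (no external
    training data). resp_signal is a known 1-D prior tied to one motion
    type; the remaining dynamic latents are free and capture, e.g.,
    cardiac motion independently of respiration."""
    T = frames.shape[0]
    z_static = torch.zeros(1, n_static, requires_grad=True)   # shared across frames
    z_card = torch.zeros(T, n_card, requires_grad=True)       # free dynamic latents
    z_resp = resp_signal.view(T, 1)                           # fixed 1-D prior, not optimized
    gen = Generator(n_static + 1 + n_card, frames.shape[1:])
    opt = torch.optim.Adam(list(gen.parameters()) + [z_static, z_card], lr=1e-3)
    for _ in range(steps):
        z = torch.cat([z_static.expand(T, -1), z_resp, z_card], dim=1)
        loss = ((gen(z) - frames) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return gen, z_static.detach(), z_card.detach()

# "Freezing" respiration afterwards amounts to holding z_resp at a constant
# value while replaying the fitted cardiac latents through the generator.
```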
Abstract: Various problems in computer vision and medical imaging can be cast as inverse problems. A frequent method for solving inverse problems is the variational approach, which amounts to minimizing an energy composed of a data fidelity term and a regularizer. Classically, handcrafted regularizers are used, which are commonly outperformed by state-of-the-art deep learning approaches. In this work, we combine the variational formulation of inverse problems with deep learning by introducing the data-driven general-purpose total deep variation regularizer. At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks. This combination allows for a rigorous mathematical analysis, including an optimal control formulation of the training problem in a mean-field setting and a stability analysis with respect to the initial values and the parameters of the regularizer. In addition, we experimentally verify the robustness against adversarial attacks and numerically derive upper bounds for the generalization error. Finally, we achieve state-of-the-art results for numerous imaging tasks.
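As a rough illustration of the variational setting (not of the total deep variation architecture itself), the sketch below minimizes a data fidelity term plus a learned regularizer by gradient descent; the small CNN energy, the names (LearnedRegularizer, reconstruct), and the denoising example are assumptions introduced here purely for illustration.

```python
import torch
import torch.nn as nn

class LearnedRegularizer(nn.Module):
    """Tiny CNN producing a scalar energy; a stand-in for a learned,
    multi-scale regularizer such as the one described in the abstract."""
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ELU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ELU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.features(x).sum()

def reconstruct(y, forward_op, reg, lam=0.1, steps=200, lr=0.5):
    """Variational reconstruction: minimize 0.5 * ||A(x) - y||^2 + lam * R(x)."""
    x = y.clone().requires_grad_(True)     # initialize with the observation
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        energy = 0.5 * ((forward_op(x) - y) ** 2).sum() + lam * reg(x)
        opt.zero_grad(); energy.backward(); opt.step()
    return x.detach()

# Example: denoising, where the forward operator A is the identity.
y = torch.randn(1, 1, 64, 64)
x_hat = reconstruct(y, forward_op=lambda x: x, reg=LearnedRegularizer())
```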
Abstract: Diverse inverse problems in imaging can be cast as variational problems composed of a task-specific data fidelity term and a regularization term. In this paper, we propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning. We cast the learning problem as a discrete sampled optimal control problem, for which we derive the adjoint state equations and an optimality condition. By exploiting the variational structure of our approach, we perform a sensitivity analysis with respect to the learned parameters obtained from different training datasets. Moreover, we carry out a nonlinear eigenfunction analysis, which reveals interesting properties of the learned regularizer. We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
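For orientation, a generic discrete optimal control formulation of this kind of learning problem can be sketched as follows; the notation (unrolled gradient steps x_s, adjoint states p_s, step size tau) is assumed here and is not taken verbatim from the paper.

```latex
% Generic sketch of a discretized optimal-control learning problem with its
% adjoint (backward) recursion; notation is assumed, not the paper's.
\begin{aligned}
&\min_{\theta}\ \sum_{i} \ell\bigl(x_S^{(i)},\, x_{\mathrm{gt}}^{(i)}\bigr)
\quad \text{s.t.} \quad
x_{s+1}^{(i)} = x_s^{(i)} - \tau\,\nabla_x\Bigl[\tfrac12\bigl\|A x_s^{(i)}-y^{(i)}\bigr\|^2 + R_\theta\bigl(x_s^{(i)}\bigr)\Bigr],\\[2pt]
&\text{adjoint states:}\quad
p_S^{(i)} = \nabla_x \ell\bigl(x_S^{(i)}, x_{\mathrm{gt}}^{(i)}\bigr),
\qquad
p_s^{(i)} = \Bigl(\partial_{x_s} x_{s+1}^{(i)}\Bigr)^{\!\top} p_{s+1}^{(i)},\\[2pt]
&\text{optimality condition:}\quad
0 = \sum_{i}\sum_{s=0}^{S-1} \Bigl(\partial_{\theta}\, x_{s+1}^{(i)}\Bigr)^{\!\top} p_{s+1}^{(i)}
  = \nabla_\theta \sum_{i}\ell\bigl(x_S^{(i)}, x_{\mathrm{gt}}^{(i)}\bigr).
\end{aligned}
```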
Abstract: We investigate a well-known phenomenon of variational approaches in image processing, where typically the best image quality is achieved when the gradient flow process is stopped before converging to a stationary point. This paradox originates from a tradeoff between the optimization and modelling errors of the underlying variational model and holds true even if deep learning methods are used to learn highly expressive regularizers from data. In this paper, we take advantage of this paradox and introduce an optimal stopping time into the gradient flow process, which in turn is learned from data by means of an optimal control approach. As a result, we obtain highly efficient numerical schemes that achieve competitive results for image denoising and image deblurring. A nonlinear spectral analysis of the gradient of the learned regularizer gives enlightening insights into its different regularization properties.
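The following is a minimal sketch, assuming a PyTorch setup, of how a stopping time can be learned by backpropagating through an unrolled, discretized gradient flow; the tiny convolutional energy, the log-parametrization of T, and the names (unrolled_flow, log_T) are illustrative assumptions rather than the paper's scheme.

```python
import torch
import torch.nn as nn

def unrolled_flow(y, reg_energy, log_T, n_steps=10):
    """Explicit Euler discretization of the gradient flow x' = -grad E(x),
    with E(x) = 0.5 * ||x - y||^2 + R(x), run up to a learnable stopping
    time T = exp(log_T)."""
    T = log_T.exp()                       # keeps the stopping time positive
    x = y.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = 0.5 * ((x - y.detach()) ** 2).sum() + reg_energy(x)
        grad = torch.autograd.grad(energy, x, create_graph=True)[0]
        x = x - (T / n_steps) * grad      # early-stopped flow, differentiable in T
    return x

# Jointly learn the regularizer parameters and the stopping time from
# (noisy, clean) pairs; here a single synthetic pair stands in for a dataset.
reg = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ELU(),
                    nn.Conv2d(8, 1, 3, padding=1))
reg_energy = lambda x: reg(x).sum()
log_T = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam(list(reg.parameters()) + [log_T], lr=1e-3)

clean = torch.zeros(1, 1, 32, 32)
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(100):
    loss = ((unrolled_flow(noisy, reg_energy, log_T) - clean) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```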