Abstract: Deformable registration of images of different modalities, essential in many medical imaging applications, remains challenging. The main difficulty is developing a robust similarity measure when the compared images capture different aspects of the underlying tissue. Here, we explore similarity metrics based on functional dependence between the intensity values of the registered images. Although functional dependence is too restrictive at the global scale, earlier work has shown competitive performance in deformable registration when such measures are applied over small enough contexts. We confirm this finding and develop the idea further by modeling local functional dependence via a linear basis function model, with the basis functions learned jointly with the deformation. The measure can be implemented via convolutions, making it efficient to compute on GPUs. We release the method as an easy-to-use tool and show good performance on three datasets compared to well-established baseline methods and earlier functional-dependence-based methods.
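The core of the measure above can be illustrated with a small sketch: within each local window, the fixed image is fitted by a least-squares combination of basis functions of the moving image's intensities, with all window sums computed by convolutions. This is an illustrative toy on 1D signals with a fixed polynomial basis (the actual method learns the basis jointly with the deformation); all function names here are hypothetical, not the released tool's API.

```python
import numpy as np

def local_sums(x, w):
    # box-filter window sums via convolution (the measure is expressible
    # with convolutions, which is what makes it GPU-friendly)
    return np.convolve(x, np.ones(w), mode="same")

def lfd_dissimilarity(fixed, moving, basis, window=9, eps=1e-6):
    """Local least-squares fit of `fixed` by basis functions of `moving`.

    Returns a per-position residual: low residual means strong local
    functional dependence. Illustrative sketch only.
    """
    Phi = np.stack([phi(moving) for phi in basis])            # (K, N)
    K, _ = Phi.shape
    # local Gram matrices G[k, l, x] = window-sum of phi_k * phi_l
    G = np.stack([[local_sums(Phi[k] * Phi[l], window) for l in range(K)]
                  for k in range(K)])                          # (K, K, N)
    b = np.stack([local_sums(Phi[k] * fixed, window)
                  for k in range(K)])                          # (K, N)
    G = G.transpose(2, 0, 1) + eps * np.eye(K)                 # (N, K, K)
    w = np.linalg.solve(G, b.T[..., None])[..., 0]             # (N, K) weights
    # least-squares residual: sum f^2 - w . b  (since G w = b)
    return local_sums(fixed**2, window) - np.einsum("nk,kn->n", w, b)

rng = np.random.default_rng(0)
m = rng.normal(size=200)
f = np.sin(m)                      # fixed is a nonlinear function of moving
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
res_dep = lfd_dissimilarity(f, m, basis)                 # low residuals
res_indep = lfd_dissimilarity(rng.normal(size=200), m, basis)  # high residuals
```

Because the per-window fit reduces to window sums of basis products, a 2D/3D implementation replaces `np.convolve` with separable image convolutions, which is the efficiency argument made in the abstract.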
Abstract: Deep-learning-based deformable medical image registration methods have emerged as a strong alternative to classical iterative registration methods. However, currently published deep learning methods do not fulfill symmetry properties with respect to their inputs as strict as those of some classical registration methods, for which the registration outcome is the same regardless of the order of the inputs. While some deep learning methods label themselves as symmetric, they are either symmetric only a priori, which does not guarantee symmetry for any given input pair, or they do not generate accurate explicit inverses. In this work, we propose a novel registration architecture which, by construction, makes the registration network anti-symmetric with respect to its inputs. We demonstrate on two datasets that the proposed method achieves state-of-the-art registration accuracy and that the generated deformations have accurate explicit inverses.
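The anti-symmetry-by-construction idea can be sketched in a toy setting. Here the "deformation" is a single translation parameter, and an arbitrary non-symmetric function `g` stands in for a learned network; wrapping it as `g(a, b) - g(b, a)` makes the output anti-symmetric, so swapping the inputs yields the exact inverse translation for every input pair, not just on average. This is a minimal illustration of the construction's symmetry guarantee, not the proposed architecture, which operates on full deformation fields.

```python
import numpy as np

def g(a, b):
    # arbitrary non-symmetric function of the input pair; hypothetical
    # stand-in for a learned feature-extraction network
    return float(np.tanh(0.5 * a.mean() + b.std()))

def predicted_translation(a, b):
    # anti-symmetric by construction: predicted_translation(b, a) is
    # exactly -predicted_translation(a, b), i.e. the inverse translation,
    # for ANY inputs, not merely in expectation over training data
    return g(a, b) - g(b, a)

rng = np.random.default_rng(1)
A, B = rng.normal(size=64), rng.normal(size=64)
t_ab = predicted_translation(A, B)
t_ba = predicted_translation(B, A)
```

The point of the construction is that inverse-consistency holds exactly by algebra, in contrast to methods that only penalize asymmetry with a loss term during training.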
Abstract: Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods allowing training with paired but misaligned data have started to emerge. However, no robust, well-performing methods applicable to a wide range of real-world data sets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method jointly trains an image synthesis network together with separate registration networks and allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks on more difficult data sets, and it opens up opportunities for developing new generic learning-based cross-modality registration algorithms.
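The notion of deformation equivariance that the loss functions encourage can be sketched as follows: a synthesis network should commute with spatial deformations, i.e. deforming the input and then synthesizing should match synthesizing and then deforming. This toy uses 1D signals, a circular shift as the "deformation", and a pointwise map as the synthesis network (which is exactly equivariant); the function names are illustrative, not the paper's actual losses or architecture.

```python
import numpy as np

def synth(x):
    # hypothetical stand-in for a synthesis network: a pointwise intensity
    # map, which commutes exactly with any spatial rearrangement
    return np.tanh(2.0 * x) + 0.1

def deform(x, shift):
    # toy spatial "deformation": a circular shift of the 1D image
    return np.roll(x, shift)

def equivariance_loss(net, x, shift):
    # penalize the gap between deform-then-synthesize and
    # synthesize-then-deform; zero iff net commutes with the deformation
    return float(np.mean((net(deform(x, shift)) - deform(net(x), shift))**2))

x = np.linspace(-1.0, 1.0, 128)
loss_equi = equivariance_loss(synth, x, 5)                 # ~0: equivariant
pos_dependent = lambda v: v * np.linspace(0.0, 1.0, v.size)
loss_nonequi = equivariance_loss(pos_dependent, x, 5)      # > 0: not equivariant
```

Encouraging this property is what lets synthesis be trained on misaligned pairs: the loss does not require the target to be spatially aligned with the synthesized output, only consistent under the (jointly estimated) deformation.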