Abstract: Microscopy images acquired by multiple camera lenses or sensors in biological experiments offer a comprehensive understanding of objects from diverse aspects. However, multi-microscope setups raise the possibility of misalignment of identical target features across different modalities, making multimodal image registration essential. In this work, we built on previous successes in biological image translation (XAcGAN) and mono-modal image registration (RoTIR) to create a deep-learning-based model, Dual-Domain RoTIR (DD_RoTIR), that addresses these challenges. Because GAN-based translation models alone are believed to be inadequate for multimodal image registration, we facilitated registration with a feature-matching algorithm based on Transformers and rotation-equivariant networks. Furthermore, since multimodal registration is more challenging, hierarchical feature matching was employed. Results show that DD_RoTIR offers good applicability and robustness across multiple microscopy image datasets.
Abstract: Image registration is an essential process for aligning features of interest across multiple images. With the recent development of deep learning techniques, image registration approaches have advanced to a new level. In this work, we present Rotation-Equivariant network and Transformers for Image Registration (RoTIR), a deep-learning-based method for the alignment of fish scale images captured by light microscopy. This approach overcomes the challenges of detecting arbitrary rotations and translations, as well as the absence of ground truth data. We employ feature-matching approaches based on Transformers and general E(2)-equivariant steerable CNNs for model construction. In addition, an artificial training dataset is used for semi-supervised learning. Results show that RoTIR successfully achieves the goal of fish scale image registration.