Previous methods for multimodal groupwise registration typically require highly specialized similarity metrics of limited applicability. In this work, we instead propose a general framework that formulates groupwise registration as a procedure of hierarchical Bayesian inference, in which the imaging process of multimodal medical images, including shape transformation and appearance variation, is characterized by a disentangled variational auto-encoder. Within this formulation, we devise a novel variational posterior and network architecture that facilitate joint learning of the common structural representation and the desired spatial correspondences. The performance of the proposed model was validated on two publicly available multimodal datasets, namely BrainWeb and the cardiac MS-CMR dataset. The results demonstrate the efficacy of our framework in realizing multimodal groupwise registration in an end-to-end fashion.
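
To give a concrete picture, the following is a minimal PyTorch sketch of the kind of architecture described above, under assumptions made purely for illustration: a toy 2-D setting, the module names (GroupwiseDVAE, struct_head, app_head, flow_head), the layer sizes, and the simple mean aggregation of structural codes are all hypothetical and do not reproduce the actual implementation of the proposed model. It illustrates how a disentangled variational encoder can produce, per image, a structural (shape) code, from which a displacement field into the common space is predicted, together with a separate appearance code, so that the common structural representation and the spatial correspondences can be learned jointly in an end-to-end fashion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.LeakyReLU(0.2))


class GroupwiseDVAE(nn.Module):
    """Toy disentangled VAE for groupwise registration: each image yields a
    structural (shape) code and an appearance code; displacement fields
    derived from the structural codes warp all images into a common space."""

    def __init__(self, z_app: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(1, 16), conv_block(16, 16))
        self.struct_head = nn.Conv2d(16, 2, 3, padding=1)   # per-pixel mu/logvar
        self.app_head = nn.Linear(16, 2 * z_app)             # global mu/logvar
        self.flow_head = nn.Conv2d(1, 2, 3, padding=1)       # 2-D displacement

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    @staticmethod
    def warp(img, flow):
        # Dense spatial transformer: offset an identity grid by the predicted
        # displacement and resample the image.
        b, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).to(img).expand(b, h, w, 2)
        return F.grid_sample(img, grid + flow.permute(0, 2, 3, 1),
                             align_corners=True)

    def forward(self, images):
        # images: list of N tensors, one modality each, shape (B, 1, H, W).
        warped, flows, struct_mu, app_codes = [], [], [], []
        for x in images:
            f = self.encoder(x)
            mu, logvar = self.struct_head(f).chunk(2, dim=1)   # structural posterior
            a_mu, a_logvar = self.app_head(f.mean(dim=(2, 3))).chunk(2, dim=1)
            flow = self.flow_head(self.reparameterize(mu, logvar))
            warped.append(self.warp(x, flow))
            flows.append(flow)
            struct_mu.append(self.warp(mu, flow))              # code in common space
            app_codes.append(self.reparameterize(a_mu, a_logvar))
        # Common structural representation: average of the structural codes
        # after warping into the group space (a reference-free formulation).
        z_common = torch.stack(struct_mu).mean(0)
        return warped, flows, z_common, app_codes


# Toy usage: a group of four 2-D modality images.
model = GroupwiseDVAE()
group = [torch.rand(1, 1, 64, 64) for _ in range(4)]
warped, flows, z_common, app = model(group)
print([w.shape for w in warped], z_common.shape)
```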