Deep learning-based tomographic image reconstruction methods have attracted considerable attention in recent years. Sparse-view reconstruction is a typical underdetermined inverse problem, and recovering high-quality CT images from only dozens of projections remains challenging in practice. To address this challenge, in this article we propose a Multi-domain Integrative Swin Transformer network (MIST-net). First, MIST-net incorporates rich features from the data, residual-data, image, and residual-image domains through a flexible network architecture. The residual-data and residual-image components act as data-consistency modules that suppress interpolation errors in the projection and image domains, thereby preserving image details. Second, to detect image features and protect edges, a trainable edge-enhancement filter is incorporated into the sub-network to improve its encoding-decoding ability. Third, building on the classical Swin Transformer, we design a high-quality reconstruction transformer (Recformer) that inherits the Swin Transformer's ability to capture both global and local features of the reconstructed image. Experiments on numerical datasets with 48 views demonstrate that MIST-net recovers small features and preserves edges better, and provides higher reconstructed image quality, than competing methods, including advanced unrolled networks. The trained network was further transferred to a real cardiac CT dataset to validate the advantages and robustness of MIST-net in clinical applications.
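To make the multi-domain idea concrete, the following is a minimal PyTorch-style sketch of such a pipeline, not the authors' MIST-net: a data-domain restoration network with a residual path over the measured sinogram, a placeholder differentiable FBP operator, and an image-domain refinement stage with a residual path over the FBP image. The `fbp` callable, the simplified attention block (used here in place of true shifted-window Swin blocks), and all module and parameter names are assumptions for illustration only.

```python
# Minimal sketch (not the authors' implementation) of a multi-domain pipeline:
# sinogram-domain correction + residual-data path, placeholder FBP, and
# image-domain refinement with a residual-image path. All names are hypothetical.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 conv layers with ReLU, used for data-domain restoration."""
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class SimpleAttentionBlock(nn.Module):
    """Stand-in for a Swin block: LayerNorm + multi-head self-attention + MLP
    over flattened image tokens (no shifted windows, for brevity)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                        # x: (B, N, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class MultiDomainNet(nn.Module):
    def __init__(self, fbp, ch=16, dim=32):
        super().__init__()
        self.fbp = fbp                           # placeholder differentiable FBP operator
        self.sino_in = nn.Conv2d(1, ch, 3, padding=1)
        self.sino_net = ConvBlock(ch)            # data-domain restoration
        self.sino_out = nn.Conv2d(ch, 1, 3, padding=1)
        self.img_in = nn.Conv2d(1, dim, 3, padding=1)
        self.refine = SimpleAttentionBlock(dim)  # image-domain refinement
        self.img_out = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, sparse_sino):
        # Residual-data path: predict a correction on top of the measured sinogram.
        sino = sparse_sino + self.sino_out(self.sino_net(self.sino_in(sparse_sino)))
        img = self.fbp(sino)                     # map to the image domain
        b, _, h, w = img.shape
        tokens = self.img_in(img).flatten(2).transpose(1, 2)   # (B, H*W, dim)
        tokens = self.refine(tokens)
        feat = tokens.transpose(1, 2).reshape(b, -1, h, w)
        # Residual-image path: refine on top of the FBP image rather than from scratch.
        return img + self.img_out(feat)


# Toy usage with an identity "FBP" just to check tensor shapes.
net = MultiDomainNet(fbp=lambda s: s)
out = net(torch.randn(1, 1, 48, 48))             # e.g., 48 views x 48 detector bins
print(out.shape)                                 # torch.Size([1, 1, 48, 48])
```

The residual connections around the measured sinogram and the FBP image are what the abstract describes as data consistency: the learned branches only correct interpolation errors rather than replacing the measured data, which helps retain fine image details.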