As segmentation labels are scarce, extensive research has been conducted to train segmentation networks without labels or with only limited labels. In particular, domain adaptation, self-supervised learning, and teacher-student architectures have been introduced to distill knowledge from various tasks to improve segmentation performance. However, these approaches appear quite different from one another, so it is not clear how they can be combined for better performance. Inspired by the recent StarGANv2 for multi-domain image translation, here we propose a novel segmentation framework via AdaIN-based knowledge distillation, in which a single generator with AdaIN layers is trained jointly with an AdaIN code generator and a style encoder so that the generator can perform both domain adaptation and segmentation. Specifically, our framework is designed for the difficult situation in chest X-ray (CXR) segmentation where segmentation masks are available only for normal CXR data, yet the trained model must be applied to both normal and abnormal CXR images. Since a single generator performs both abnormal-to-normal domain conversion and segmentation simply by changing the AdaIN codes, it can synergistically learn the common features and thereby improve segmentation performance. Experimental results on CXR data confirm that the trained network achieves state-of-the-art segmentation performance for both normal and abnormal CXR images.
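The mechanism that lets one generator switch between tasks is adaptive instance normalization (AdaIN): each feature channel is normalized to zero mean and unit variance, then re-scaled and re-shifted by a style-dependent code, so different codes steer the same shared weights toward different behaviors. Below is a minimal NumPy sketch of the AdaIN operation itself, assuming per-channel scale/shift codes; the "segmentation" and "domain-transfer" code names are illustrative placeholders, not the paper's actual learned codes.

```python
import numpy as np

def adain(content, style_scale, style_shift, eps=1e-5):
    """Adaptive Instance Normalization (AdaIN).

    content: (C, H, W) feature map.
    style_scale, style_shift: (C,) codes (in the paper these would come
    from a learned AdaIN code generator or style encoder; here they are
    hand-picked for illustration).
    Each channel is normalized to zero mean / unit variance, then
    re-scaled and re-shifted by the code, so swapping the code swaps
    the behavior of the shared generator.
    """
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return style_scale[:, None, None] * normalized + style_shift[:, None, None]

# One feature map, two hypothetical AdaIN codes -> two different outputs
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 8, 8))
seg_code = (np.ones(4), np.zeros(4))     # illustrative "segmentation" code
dom_code = (2 * np.ones(4), np.ones(4))  # illustrative "domain-transfer" code
out_seg = adain(feat, *seg_code)
out_dom = adain(feat, *dom_code)
```

After AdaIN, the per-channel statistics of the output match the supplied code (mean ≈ shift, std ≈ scale), which is exactly how a single set of convolutional weights can be redirected between the segmentation and domain-conversion tasks.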