Abstract: Despite the recent success of image generation and style transfer with Generative Adversarial Networks (GANs), hair synthesis and style transfer remain challenging due to the shape and style variability of human hair in in-the-wild conditions. Current state-of-the-art hair synthesis approaches struggle to maintain the global composition of the target style and cannot be used in real-time applications because of their high running costs on high-resolution portrait images. We therefore propose a novel hairstyle transfer method, called EHGAN, which reduces computational cost to enable real-time processing while transferring hairstyles with better global structure than other state-of-the-art hair synthesis methods. To achieve this goal, we train an encoder and a low-resolution generator to transfer the hairstyle, and then increase the resolution of the results with a pre-trained super-resolution model. We utilize Adaptive Instance Normalization (AdaIN) and design a novel Hair Blending Block (HBB) to obtain the best performance from the generator. EHGAN runs around 2.7 times faster than the state-of-the-art MichiGAN and over 10,000 times faster than LOHO, while obtaining better photorealism and structural similarity to the desired style than its competitors.
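The abstract relies on Adaptive Instance Normalization (AdaIN) to inject the target style into the generator's features. As background, the standard AdaIN operation (Huang and Belongie, 2017) can be sketched in NumPy as below; this is an illustration of the published formula, not the authors' EHGAN implementation, and the channel-first tensor layout is an assumption:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: re-normalizes each channel of the
    content feature map so its mean and standard deviation match those of
    the style feature map. Inputs are assumed to have shape (C, H, W).
    eps guards against division by zero for constant channels."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Whiten the content statistics, then re-color with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean
```

Because the operation only matches first- and second-order channel statistics, it is cheap at inference time, which is consistent with the real-time goal stated above.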
Abstract: Recent successes in generative modeling have accelerated studies on this subject and attracted the attention of researchers. One of the most important methods behind this success is Generative Adversarial Networks (GANs), which have many application areas such as virtual reality (VR), augmented reality (AR), super-resolution, and image enhancement. Despite recent advances in hair synthesis and style transfer using deep learning and generative modeling, the problem still contains unsolved challenges due to the complex nature of hair. The methods proposed in the literature to address this problem generally focus on making high-quality hair edits on images. In this thesis, a generative adversarial network method is proposed to solve the hair synthesis problem. The method aims to achieve real-time hair synthesis while producing visual outputs that compete with the best methods in the literature. The proposed method was trained on the FFHQ dataset, and its results on hair style transfer and hair reconstruction tasks were evaluated. These results and the method's running time were compared with MichiGAN, one of the best methods in the literature, at a resolution of 128×128. The comparison shows that the proposed method achieves results competitive with MichiGAN in terms of realistic hair synthesis and performs better in terms of running time.