In this paper, we propose a framework for Generative Adversarial Network (GAN) inversion that uses a semantic segmentation map to invert an input image into the GAN latent space. In general, it remains difficult to invert the semantic information of an input image into the GAN latent space; in particular, conventional GAN inversion methods often fail to accurately invert semantic details such as the shape of glasses or the hairstyle. To this end, we propose a framework that uses the semantic segmentation map of the real image to guide the latent codes corresponding to the coarse-resolution feature maps in StyleGAN2. Experimental results show that, compared with previous GAN inversion methods, our method reconstructs input images more accurately and enables detailed editing of input images across a variety of semantic attributes.
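The core idea of segmentation-guided inversion can be sketched as optimizing a latent code under a combined objective: a pixel reconstruction term plus a term that pushes the generated output toward the target semantic map. The toy example below illustrates only this objective structure; the linear "generator", the sign-based "segmentation" target, and the numerical-gradient loop are all stand-ins invented for illustration, not the paper's actual StyleGAN2 pipeline or segmentation network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator: a fixed linear map from latent w to "image" x.
# In the paper's setting this would be StyleGAN2's coarse layers.
G = rng.standard_normal((16, 4))

def generate(w):
    return G @ w

# Target "real image" produced by a hidden latent we try to recover.
w_true = rng.standard_normal(4)
x_real = generate(w_true)

# Hypothetical per-pixel "segmentation" target: the sign of the real image,
# standing in for the semantic map that guides the coarse latents.
seg_real = (x_real > 0).astype(float)

def inversion_loss(w, lam=0.1):
    x = generate(w)
    recon = np.mean((x - x_real) ** 2)                        # reconstruction term
    soft_seg = np.tanh(x) * 0.5 + 0.5                         # soft label prediction
    seg = np.mean((soft_seg - seg_real) ** 2)                 # segmentation guidance term
    return recon + lam * seg

# Plain gradient descent on the latent via numerical gradients (real methods
# backpropagate through the generator and a segmentation network instead).
w = np.zeros(4)
eps, lr = 1e-5, 0.05
for _ in range(200):
    grad = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        grad[i] = (inversion_loss(w + d) - inversion_loss(w - d)) / (2 * eps)
    w -= lr * grad

print(inversion_loss(w) < inversion_loss(np.zeros(4)))  # loss drops from the initial latent
```

The segmentation term is what distinguishes this sketch from plain pixel-space inversion: even when two latents give similar reconstruction error, the guidance term favors the one whose output agrees with the target semantic layout.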
ACKNOWLEDGEMENT This work has been supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01424) supervised by the IITP (Institute for Information & communications Technology Promotion).