Authors: Apavou Clément & Belkada Younes
From left to right: images generated using StyleGAN and the boundaries Bald, Blond, Heavy_Makeup, Gray_Hair
This is the repository for a project from the Introduction to Numerical Imaging course (i.e., Introduction à l'Imagerie Numérique in French), given by the MVA Master's program at ENS Paris-Saclay. The project and repository are based on the work of Shen et al. and fully support their codebase. You can refer to the original README to reproduce their results.
- Introduction
- 🔥 Additional features
- 🔨 Training an attribute detection classifier
- ⭐ Generate images using StyleGAN & StyleGAN2 & StyleGAN3
- ✏️ Edit generated images
In this repository, we build on InterFaceGAN, the semantic face editing approach of Shen et al. Specifically, we apply their method to new face attributes and to StyleGAN3. We show qualitatively that moving a latent vector toward a trained boundary, in many cases, preserves the semantic content of the generated image (its local structure) while modifying the desired attribute, which helps demonstrate the disentangled nature of the StyleGAN latent spaces.
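Concretely, the edit is a linear move in latent space: given a latent code z and the unit normal n of the SVM boundary trained for an attribute, the edited code is z + alpha * n, where the sign of alpha selects the direction. A minimal numpy sketch of this operation (the function and variable names are ours for illustration, not the codebase's API):

```python
import numpy as np

def move_latent(z, boundary, alpha):
    """Shift a latent code along the normal of an attribute boundary.

    z        : (1, latent_dim) latent code fed to the generator
    boundary : (1, latent_dim) unit normal of the trained SVM hyperplane
    alpha    : signed step size; positive moves toward the attribute,
               negative moves away from it
    """
    return z + alpha * boundary

latent_dim = 512                        # StyleGAN latent dimensionality
z = np.random.randn(1, latent_dim)      # a random latent code
n = np.random.randn(1, latent_dim)      # stand-in for a loaded boundary
n /= np.linalg.norm(n)                  # boundaries are unit normals
z_edited = move_latent(z, n, alpha=3.0)
```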
- Supports StyleGAN2 & StyleGAN3 on the classic attributes
- New attributes (Bald, Gray hair, Blond hair, Earrings, ...) for:
- StyleGAN
- StyleGAN2
- StyleGAN3
- Supports face generation using StyleGAN3 & StyleGAN2
The list of new features can be found in our attribute detection classifier repository
We use a ViT-base model to train an attribute detection classifier; please refer to our classification code if you want to test it on new models. Once you retrieve the trained SVMs from that repository, you can move them into this one and use them directly.
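For reference, boundary training follows Shen et al.: generate images, score the target attribute on each image with the classifier, then fit a linear SVM that separates the most confident positives from the most confident negatives in latent space; the boundary is the unit normal of the resulting hyperplane. A sketch under these assumptions (the arrays and the output filename are placeholders, not artifacts shipped with this repo):

```python
import numpy as np
from sklearn import svm

# Placeholder inputs: latent codes used to generate images, and the
# ViT classifier's confidence score for one attribute on each image.
latents = np.random.randn(5000, 512)
scores = np.random.rand(5000)

# Keep only the most confident negatives and positives, as in InterFaceGAN.
order = np.argsort(scores)
neg_idx, pos_idx = order[:1000], order[-1000:]
X = np.concatenate([latents[pos_idx], latents[neg_idx]])
y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])

# Fit a linear SVM; its weight vector is normal to the separating hyperplane.
clf = svm.LinearSVC(C=1.0, max_iter=10000)
clf.fit(X, y)

# Normalize to get the unit boundary used for editing (filename illustrative).
boundary = clf.coef_ / np.linalg.norm(clf.coef_)
np.save("custom_attribute_boundary.npy", boundary)
```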
We did not change anything in the structure of the old repository; please refer to the previous README.
For StyleGAN
We use the StyleGAN model trained on FFHQ for our experiments. If you want to reproduce them, run:
wget -P interfacegan/models/pretrain https://www.dropbox.com/s/qyv37eaobnow7fu/stylegan_ffhq.pth
We use the StyleGAN2 model trained on FFHQ for our experiments. If you want to reproduce them, run:
wget -P models/pretrain https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhq-1024x1024.pkl
We use the StyleGAN3 model trained on FFHQ for our experiments. If you want to reproduce them, run:
wget -P models/pretrain https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-t-ffhq-1024x1024.pkl
The pretrained models should be located in models/pretrain. If a downloaded model is not there, move its file into that directory.
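If you fetched a checkpoint elsewhere, a command along these lines (filename illustrative) puts it in place:
mkdir -p models/pretrain && mv stylegan3-t-ffhq-1024x1024.pkl models/pretrain/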
If you want to generate 10 images using the StyleGAN3 model downloaded above, run:
python generate_data.py -m stylegan3_ffhq -o output_stylegan3 -n 10
The arguments are exactly the same as in the original repository; additionally, the code supports the flag -m stylegan3_ffhq for StyleGAN3 and -m stylegan2_ffhq for StyleGAN2.
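For example, the analogous command for StyleGAN2 (the output directory name is illustrative) is:
python generate_data.py -m stylegan2_ffhq -o output_stylegan2 -n 10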
You can edit the generated images using our trained boundaries! Depending on the generator you want to use, make sure you have downloaded the right model and put it into models/pretrain.
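As an illustration, an editing run following the original repository's edit.py interface might look like the command below; the boundary filename is hypothetical, so check the names of the boundaries you actually retrieved:
python edit.py -m stylegan3_ffhq -b boundaries/stylegan3_ffhq_gray_hair_boundary.npy -n 5 -o results/stylegan3_gray_hair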
Please refer to our interactive Google Colab notebook to play with our models by clicking the following badge:
Example of images generated using StyleGAN, moving the latent codes in the direction of the attribute gray hair:
Example of images generated using StyleGAN2, moving the latent codes in the opposite direction of the attribute young:
Example of images generated using StyleGAN3, moving the latent codes in the direction of the attribute beard:
This repository is based on the original InterFaceGAN code. If you use it, please cite:
@inproceedings{shen2020interpreting,
title = {Interpreting the Latent Space of GANs for Semantic Face Editing},
author = {Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei},
booktitle = {CVPR},
year = {2020}
}
@article{shen2020interfacegan,
title = {InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs},
author = {Shen, Yujun and Yang, Ceyuan and Tang, Xiaoou and Zhou, Bolei},
journal = {TPAMI},
year = {2020}
}