This repo is the official implementation of "StyleSwin: Transformer-based GAN for High-resolution Image Generation" (CVPR 2022).
By Bowen Zhang, Shuyang Gu, Bo Zhang, Jianmin Bao, Dong Chen, Fang Wen, Yong Wang and Baining Guo.
Despite the tantalizing success in a broad range of vision tasks, transformers have not yet demonstrated on-par ability with ConvNets in high-resolution image generative modeling. In this paper, we seek to explore using pure transformers to build a generative adversarial network for high-resolution image synthesis. To this end, we believe that local attention is crucial to strike the balance between computational efficiency and modeling capacity. Hence, the proposed generator adopts Swin transformer in a style-based architecture. To achieve a larger receptive field, we propose double attention, which simultaneously leverages the context of the local and the shifted windows, leading to improved generation quality. Moreover, we show that offering the knowledge of the absolute position that has been lost in window-based transformers greatly benefits the generation quality. The proposed StyleSwin is scalable to high resolutions, with both the coarse geometry and the fine structures benefiting from the strong expressivity of transformers. However, blocking artifacts occur during high-resolution synthesis because performing the local attention in a block-wise manner may break the spatial coherency. To solve this, we empirically investigate various solutions, among which we find that employing a wavelet discriminator to examine the spectral discrepancy effectively suppresses the artifacts. Extensive experiments show the superiority over prior transformer-based GANs, especially on high resolutions, e.g., 1024x1024. StyleSwin, without complex training strategies, excels over StyleGAN on CelebA-HQ 1024x1024, and achieves on-par performance on FFHQ 1024x1024, proving the promise of using transformers for high-resolution image generation.
Dataset | Resolution | FID | Pretrained Model |
---|---|---|---|
FFHQ | 256x256 | 2.81 | Google Drive/Azure Storage |
LSUN Church | 256x256 | 2.95 | Google Drive/Azure Storage |
CelebA-HQ | 256x256 | 3.25 | Google Drive/Azure Storage |
FFHQ | 1024x1024 | 5.07 | Google Drive/Azure Storage |
CelebA-HQ | 1024x1024 | 4.43 | Google Drive/Azure Storage |
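The released checkpoints are ordinary PyTorch files. If you are unsure what a downloaded file contains, a small inspection sketch like the one below can help; the key names inside the checkpoint depend on the training script, so print them and adapt accordingly:

```python
import torch

# Minimal sketch: inspect a downloaded StyleSwin checkpoint before evaluation.
# The exact key names (generator, EMA weights, optimizer states, ...) are not
# guaranteed here; print them and match them against train_styleswin.py.
ckpt = torch.load("/path/to/checkpoint.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```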
To install the dependencies:
python -m pip install -r requirements.txt
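Since all commands below go through torch.distributed.launch, it is worth confirming first that the installed PyTorch build sees your GPUs:

```python
# Quick environment check after installing requirements.txt.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
```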
Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo:
To generate 50k image samples of resolution 1024 and evaluate the fid score:
python -m torch.distributed.launch --nproc_per_node=1 train_styleswin.py --sample_path /path_to_save_generated_samples --size 1024 --ckpt /path/to/checkpoint --eval --val_num_batches 12500 --val_batch_size 4 --eval_gt_path /path_to_real_images_50k
To generate 50k image samples of resolution 256 and evaluate the fid score:
python -m torch.distributed.launch --nproc_per_node=1 train_styleswin.py --sample_path /path_to_save_generated_samples --size 256 --G_channel_multiplier 2 --ckpt /path/to/checkpoint --eval --val_num_batches 12500 --val_batch_size 4 --eval_gt_path /path_to_real_images_50k
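The --eval mode above computes FID against --eval_gt_path by itself. If you additionally want to cross-check the score with an external tool, one option (our suggestion, not part of this repo) is the third-party pytorch-fid package, which simply compares two image folders:

```python
# Optional FID cross-check with the third-party pytorch-fid package
# (pip install pytorch-fid). Results may differ slightly from the built-in
# evaluation due to image resizing and Inception preprocessing details.
from pytorch_fid.fid_score import calculate_fid_given_paths

fid = calculate_fid_given_paths(
    ["/path_to_save_generated_samples", "/path_to_real_images_50k"],
    batch_size=50,
    device="cuda",
    dims=2048,
)
print(f"FID: {fid:.2f}")
```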
When training on FFHQ and CelebA-HQ, we use ImageFolder datasets. The expected data structure is as follows (a minimal loading sketch is shown after the tree):
FFHQ
├── images
│   ├── 000001.png
│   ├── ...
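As a sanity check, the layout above can be read with torchvision's ImageFolder; this is only a minimal loading sketch, and the dataset class actually used by train_styleswin.py may apply different transforms:

```python
# Minimal sketch of loading the folder layout above with torchvision.
# ImageFolder treats "images/" as a (single) class directory.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

dataset = datasets.ImageFolder("/path_to_ffhq_256", transform=transform)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)

images, _ = next(iter(loader))
print(images.shape)  # e.g. torch.Size([4, 3, 256, 256])
```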
When training on LSUN Church, please follow stylegan2-pytorch to create an lmdb dataset first. After that, the data structure is as follows (a quick verification sketch follows the tree):
LSUN Church
├── data.mdb
└── lock.mdb
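After conversion, you can quickly verify the database by opening it read-only and printing its entry count; this is a generic lmdb check and does not depend on the exact key layout written by the preparation script:

```python
# Generic sanity check of the converted lmdb dataset (pip install lmdb).
import lmdb

env = lmdb.open("/path_to_lsun_church_256", readonly=True, lock=False)
with env.begin() as txn:
    print("entries:", env.stat()["entries"])
    # Some preparation scripts also store a textual "length" key.
    length = txn.get(b"length")
    if length is not None:
        print("length:", int(length.decode("utf-8")))
```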
To train a new model of FFHQ-1024 from scratch:
python -m torch.distributed.launch --nproc_per_node=8 train_styleswin.py --batch 2 --path /path_to_ffhq_1024 --checkpoint_path /tmp --sample_path /tmp --size 1024 --D_lr 0.0002 --D_sn --ttur --eval_gt_path /path_to_ffhq_real_images_50k --lr_decay --lr_decay_start_steps 600000
To train a new model of CelebA-HQ 1024 from scratch:
python -m torch.distributed.launch --nproc_per_node=8 train_styleswin.py --batch 2 --path /path_to_celebahq_1024 --checkpoint_path /tmp --sample_path /tmp --size 1024 --D_lr 0.0002 --D_sn --ttur --eval_gt_path /path_to_celebahq_real_images_50k
To train a new model of FFHQ-256 from scratch:
python -m torch.distributed.launch --nproc_per_node=8 train_styleswin.py --batch 4 --path /path_to_ffhq_256 --checkpoint_path /tmp --sample_path /tmp --size 256 --G_channel_multiplier 2 --bcr --D_lr 0.0002 --D_sn --ttur --eval_gt_path /path_to_ffhq_real_images_50k --lr_decay --lr_decay_start_steps 775000 --iter 1000000
To train a new model of CelebA-HQ 256 from scratch:
python -m torch.distributed.launch --nproc_per_node=8 train_styleswin.py --batch 4 --path /path_to_celebahq_256 --checkpoint_path /tmp --sample_path /tmp --size 256 --G_channel_multiplier 2 --bcr --r1 5 --D_lr 0.0002 --D_sn --ttur --eval_gt_path /path_to_celebahq_real_images_50k --lr_decay --lr_decay_start_steps 500000
To train a new model of LSUN Church 256 from scratch:
python -m torch.distributed.launch --nproc_per_node=8 train_styleswin.py --batch 4 --path /path_to_lsun_church_256 --checkpoint_path /tmp --sample_path /tmp --size 256 --G_channel_multiplier 2 --use_flip --r1 5 --lmdb --D_lr 0.0002 --D_sn --ttur --eval_gt_path /path_to_lsun_church_real_images_50k --lr_decay --lr_decay_start_steps 1300000 --iter 1500000
Notice: when training on 16 GB GPUs, you can add --use_checkpoint to save GPU memory. In addition, we evaluate the FID score every 25,000 steps and select the model with the best FID score during training.
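For reference, activation (gradient) checkpointing, which is what a flag like --use_checkpoint usually enables, trades extra compute for memory by recomputing intermediate activations during the backward pass. A generic PyTorch illustration of the idea (not the repo's actual implementation):

```python
# Generic illustration of activation checkpointing: activations inside `block`
# are not stored during the forward pass and are recomputed on backward,
# which lowers peak GPU memory at the cost of extra compute.
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.GELU(),
    torch.nn.Linear(512, 512),
)

x = torch.randn(8, 512, requires_grad=True)
y = checkpoint(block, x)   # forward without caching intermediate activations
y.sum().backward()         # activations are recomputed here
print(x.grad.shape)
```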
Image samples of FFHQ-1024 generated by StyleSwin:
Image samples of CelebA-HQ 1024 generated by StyleSwin:
Latent code interpolation examples of FFHQ-1024 between the left-most and the right-most images:
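Such interpolations are obtained by blending two latent codes and decoding every intermediate code with the generator. A minimal sketch, with a dummy stand-in generator so the snippet runs on its own; replace it with the StyleSwin generator from this repo loaded with pretrained weights:

```python
# Minimal latent interpolation sketch. `generator` below is a dummy stand-in;
# replace it with the StyleSwin generator from this repo (loaded from a
# pretrained checkpoint) to reproduce interpolations like the ones above.
import torch

generator = torch.nn.Sequential(torch.nn.Linear(512, 3 * 64 * 64), torch.nn.Tanh())

torch.manual_seed(0)
z_a = torch.randn(1, 512)   # latent code of the left-most image
z_b = torch.randn(1, 512)   # latent code of the right-most image

frames = []
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps=8):
        z = (1.0 - t) * z_a + t * z_b   # linear interpolation in latent space
        frames.append(generator(z))     # decode each blended code to an image

print(len(frames), frames[0].shape)
```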
@misc{zhang2021styleswin,
title={StyleSwin: Transformer-based GAN for High-resolution Image Generation},
author={Bowen Zhang and Shuyang Gu and Bo Zhang and Jianmin Bao and Dong Chen and Fang Wen and Yong Wang and Baining Guo},
year={2021},
eprint={2112.10762},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Our work does not directly modify existing images, which might alter the identity or expression of real people, and we discourage the use of our work in such applications as it is not designed to do so. We have quantitatively verified that the proposed method does not show evident disparity across genders and ages, as the model mostly follows the dataset distribution; however, we encourage additional care if you intend to apply the system to particular demographic groups. We also encourage the use of fair and representative data when training on customized datasets. We caution that the high-resolution images produced by our model may potentially be misused for impersonating humans; viable solutions to mitigate this include adding tags or watermarks when distributing the generated photos.
This code borrows heavily from stylegan2-pytorch and Swin-Transformer. We also thank the contributors of Positional Encoding in GANs, DiffAug, StudioGAN, and GIQA.
This is the codebase for our research work. Please open a GitHub issue if you need any help. If you have any questions regarding the technical details, feel free to contact [email protected] or [email protected].
The codes and the pretrained models in this repository are under the MIT license as specified by the LICENSE file.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.