Conditional face generation experiments using GAN models on the CelebA dataset.
- Vanilla DCGAN: a standard DCGAN as described in the DCGAN paper; it suffers from training stability issues.
- Hinge DCGAN with custom layers: an improved DCGAN trained with the hinge loss (sketched after this list) and equipped with spectral normalization, self-attention, minibatch standard deviation, and pixelwise normalization, which allows stable training and better visual results than the vanilla DCGAN.
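For reference, a minimal sketch of the hinge GAN loss mentioned above, written in PyTorch; the function names are illustrative and not taken from this repository:

```python
import torch
import torch.nn.functional as F

def d_hinge_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    # Discriminator hinge loss: push real logits above +1 and fake logits below -1.
    loss_real = F.relu(1.0 - real_logits).mean()
    loss_fake = F.relu(1.0 + fake_logits).mean()
    return loss_real + loss_fake

def g_hinge_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    # Generator hinge loss: maximise the discriminator's score on generated samples.
    return -fake_logits.mean()
```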
To further improve generated image quality, a model can also be trained with an exponential moving average (EMA) of the generator weights, as described in the paper "The Unusual Effectiveness of Averaging in GAN Training".
The code is based on the update function found here, which updates a second generator's weights with the following EMA rule:
w_{t+1} = (1 - β) * u_t + β * w_t (where u_t are the weights of the generator trained via gradient methods and w_t are the averaged weights)
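A minimal sketch of this EMA update in PyTorch, assuming two generator instances with identical architectures; the function name and the default β value are illustrative assumptions, not the repository's actual API:

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_generator: torch.nn.Module, generator: torch.nn.Module, beta: float = 0.999) -> None:
    # Applies w_{t+1} = (1 - beta) * u_t + beta * w_t, where u_t are the weights of the
    # generator trained by gradient descent and w_t are the averaged (EMA) weights.
    for ema_p, p in zip(ema_generator.parameters(), generator.parameters()):
        ema_p.mul_(beta).add_(p, alpha=1.0 - beta)
    # Copy buffers (e.g. batch-norm running statistics) directly instead of averaging them.
    for ema_b, b in zip(ema_generator.buffers(), generator.buffers()):
        ema_b.copy_(b)

# Typical usage: keep a frozen copy of the generator and update it after every optimiser step.
# ema_G = copy.deepcopy(G).eval()
# ema_update(ema_G, G, beta=0.999)
```

Samples are then drawn from the averaged copy, which usually yields smoother, higher-quality images than the raw generator.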