The Path to StyleGAN2 - Implementing the StyleGAN

This is the second post on the road to StyleGAN2. In this post we implement the StyleGAN and in the third and final post we will implement StyleGAN2.

You can find the StyleGAN paper here. Note: when I refer to “the authors”, I am referring to Karras et al., the authors of the StyleGAN paper.

This post will be a lot shorter than my last post, on the Progressive Growing GAN (PGGAN), because the StyleGAN reuses a lot of the techniques from the PGGAN. As such, I strongly suggest you read the PGGAN post before proceeding if you haven’t already (so much is reused from the PGGAN that understanding it is a prerequisite to understanding the StyleGAN).

We make use of the CelebA-HQ 256 dataset again; it can be found at: https://www.kaggle.com/datasets/badasstechie/celebahq-resized-256x256

The implementation of the StyleGAN makes a few major changes to the Generator (G) architecture, but the underlying structure follows the Progressive Growing GAN (PGGAN) paper. The Discriminator model remains unchanged from the PGGAN. By modifying only the G setup, StyleGAN achieves better image generation than the PGGAN. The paper also covers some very interesting topics that shed light on the inner workings of GANs.
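To give a flavour of what those Generator changes look like before we dive in, here is a minimal sketch (not the full implementation from this post) of the two pieces StyleGAN adds on top of the PGGAN generator: the mapping network that turns the latent z into an intermediate latent w, and the AdaIN (adaptive instance normalization) step that injects w as a "style" at each resolution. The layer sizes and shapes below are illustrative assumptions, not the exact values used later.

```python
# Minimal sketch of StyleGAN's mapping network and AdaIN, under assumed sizes.
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    """8-layer MLP mapping z (in Z) to the intermediate latent w (in W)."""

    def __init__(self, latent_dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)


class AdaIN(nn.Module):
    """Adaptive instance norm: normalize the feature map, then scale and
    shift each channel using a learned affine transform of w."""

    def __init__(self, channels, latent_dim=512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.style = nn.Linear(latent_dim, channels * 2)  # per-channel scale and bias

    def forward(self, x, w):
        scale, bias = self.style(w).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        bias = bias[:, :, None, None]
        return self.norm(x) * (scale + 1) + bias


# Usage with illustrative shapes (batch of 4, 512-d latent, 256-channel 32x32 features)
z = torch.randn(4, 512)
w = MappingNetwork()(z)                  # (4, 512)
features = torch.randn(4, 256, 32, 32)   # some intermediate generator feature map
styled = AdaIN(256)(features, w)         # (4, 256, 32, 32)
```

The key design choice is that w, not z, is what modulates every resolution block of the generator, which is what the paper exploits for style mixing and its disentanglement analysis.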
