pix2pix Generative Adversarial Networks


In this tutorial, we show how to construct the pix2pix generative adversarial network from scratch in TensorFlow and use it for image-to-image translation of satellite images to maps.

Beyond the many impressive feats that Generative Adversarial Networks (GANs) can achieve, such as generating completely new entities from scratch, they have another fantastic use case: image-to-image translation. In previous blogs, we covered the implementation of Conditional GANs. These conditional GANs are characterized by the ability to produce the required output for the particular input provided to the network during training. Before we proceed to one of the variations of these conditional GANs in this article, I would suggest checking the CGAN implementation from this link.
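To make the conditioning idea concrete, here is a minimal sketch (not the article's full implementation) of a conditional generator in TensorFlow/Keras: the generator receives a condition, here assumed to be a class label, alongside the noise vector, so the image it produces is tied to that particular input. The layer sizes and the label count are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10   # assumed number of condition labels, for illustration
NOISE_DIM = 100

def build_conditional_generator():
    noise = layers.Input(shape=(NOISE_DIM,))
    label = layers.Input(shape=(1,), dtype="int32")
    # Embed the condition and merge it with the noise vector.
    label_embedding = layers.Flatten()(layers.Embedding(NUM_CLASSES, NOISE_DIM)(label))
    x = layers.Concatenate()([noise, label_embedding])
    x = layers.Dense(7 * 7 * 128, activation="relu")(x)
    x = layers.Reshape((7, 7, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model([noise, label], out, name="conditional_generator")

generator = build_conditional_generator()
generator.summary()
```

The only difference from an unconditional generator is that the label embedding is concatenated with the noise before the upsampling stack, which is the core trick conditional GANs rely on.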

In this article, we will explore one variation of the GAN that has gained tremendous popularity in recent years for its ability to translate images from a source domain to a target domain with high precision: the pix2pix GAN. We will start with a brief introduction of the basics required for following this blog, then work through the pix2pix GAN research paper with an architectural breakdown. Finally, we will develop a satellite-image-to-map translation project with pix2pix GANs from scratch.
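Before the detailed breakdown, here is a minimal structural sketch, under an assumed 256x256 RGB input size, of the two pieces a pix2pix-style model pairs together: a generator that maps the source image (satellite photo) to the target image (map), and a PatchGAN-style discriminator that judges the (source, translated) pair rather than the translated image alone. The layer counts are deliberately shallow placeholders, not the paper's exact architecture, which the rest of the article builds out.

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SHAPE = (256, 256, 3)  # assumed image size for the satellite-to-map task

def build_toy_generator():
    inp = layers.Input(shape=IMG_SHAPE)
    # Encode the source image, then decode it back to target resolution.
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(inp, out, name="toy_generator")

def build_toy_discriminator():
    src = layers.Input(shape=IMG_SHAPE)   # satellite image (the condition)
    tgt = layers.Input(shape=IMG_SHAPE)   # real or generated map
    x = layers.Concatenate()([src, tgt])  # the discriminator sees the pair
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    patches = layers.Conv2D(1, 4, padding="same")(x)  # per-patch real/fake logits
    return tf.keras.Model([src, tgt], patches, name="toy_discriminator")

generator = build_toy_generator()
discriminator = build_toy_discriminator()
```

The full tutorial replaces the toy generator with a U-Net (encoder-decoder with skip connections) and trains with an adversarial loss plus an L1 reconstruction term, but the pairing shown here is the essence of how pix2pix conditions its discriminator.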
