This guide shows you how to set up the Stable Diffusion web UI in a Gradient Deployment, and get started synthesizing images in just moments with Gradient's powerful GPUs.
The popularity of Stable Diffusion has continued to explode as more people catch on to the craze. A powerful, pre-trained version of the Latent Diffusion model, Stable Diffusion is a diffusion model released last month by the researchers at CompVis. The model was trained using subsets of the LAION-5B dataset, including the high-resolution subset for initial training and the "aesthetics" subset for subsequent rounds.
In the end, they were left with an extremely robust model that is capable of simulating and recreating nearly any concept imaginable in visual form, with no guidance needed beyond a text prompt input. Be sure to check out our full write-up and tech talk on Stable Diffusion for more information about how this model came to be, its underlying architecture, and a fuller look at its capabilities at launch.
In this article, we will take a look at the AUTOMATIC1111 fork of the Stable Diffusion web UI, and show how to spin the web UI up in less than a minute on any Paperspace GPU-powered machine.
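For context, the web UI we will be deploying is ordinarily installed by cloning the AUTOMATIC1111 repository and running its launcher script. A minimal local-install sketch looks like the following (the specific flags shown, `--listen` and `--port`, are standard options of the web UI's launcher; the Gradient Deployment route described in this guide packages these steps for you):

```shell
# Sketch of a manual install; assumes git and a recent Python are available.
# Clone the AUTOMATIC1111 fork of the Stable Diffusion web UI.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# First run creates a virtual environment, installs dependencies,
# and starts the Gradio server. --listen exposes it beyond localhost;
# --port sets the serving port (7860 is the default).
./webui.sh --listen --port 7860
```

On a fresh machine this first launch can take several minutes to download dependencies and model weights, which is exactly the setup time a prebuilt Gradient Deployment avoids.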