Posted today by @cloneofsimo, @andreasjansson, @anotherjesse, and @zeke

Introducing LoRA: A faster way to fine-tune Stable Diffusion - Replicate

Submitted by Style Pass
2023-02-07 16:00:19


A few short months later, Simo Ryu has created a new technique called LoRA. Similar to DreamBooth, LoRA lets you fine-tune Stable Diffusion on just a few images, then generate new images containing those objects or styles. Unlike DreamBooth, LoRA is fast: while DreamBooth takes around twenty minutes to run and produces models that are several gigabytes, LoRA trains in as little as eight minutes and produces models that are around 5MB.

LoRA stands for low-rank adaptation, a mathematical technique to reduce the number of parameters that are trained. You can think of it like creating a diff of the model, instead of saving the whole thing. Check out the README on GitHub and the paper on arXiv to learn more about how it works.
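To make the "diff of the model" intuition concrete, here is a minimal numerical sketch of a low-rank update (an illustration of the general idea, not Simo's implementation — the function name, shapes, and `rank`/`alpha` parameters are ours):

```python
import numpy as np

def lora_adapt(W, rank=4, alpha=1.0, rng=None):
    """Sketch of a LoRA-style low-rank update.

    Instead of fine-tuning the full weight matrix W (d x k), LoRA
    learns two small factors, A (rank x k) and B (d x rank); the
    adapted weight is W + alpha * (B @ A). Only A and B are trained
    and saved -- that pair is the "diff" of the model.
    """
    rng = rng or np.random.default_rng(0)
    d, k = W.shape
    A = rng.normal(scale=0.01, size=(rank, k))  # small random init
    B = np.zeros((d, rank))  # zero init: the update starts as a no-op
    return W + alpha * (B @ A), A, B

# A 768 x 768 attention weight, roughly Stable Diffusion scale
W = np.random.default_rng(1).normal(size=(768, 768))
W_adapted, A, B = lora_adapt(W, rank=4)

# The saved "diff" is A and B: 2 * 4 * 768 parameters instead of
# 768 * 768 -- about 1% of the full matrix, which is why LoRA
# checkpoints are megabytes rather than gigabytes.
```

Because `B` starts at zero, the adapted weights initially equal the originals; training then moves only `A` and `B`, never `W` itself.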

We've been collaborating with Simo to get LoRA up on Replicate. You can now train LoRA models in the cloud with a single API call. Unlike DreamBooth where you had to wait for a model to push and boot up, LoRA predictions run instantly with no cold boots.
