Glazing over security | SPY Lab

Glaze is a piece of software that aims to protect artists from having their work used to train machine learning models that mimic their style. Glaze does this by “perturbing” an artist’s images, so that training a machine learning model on these perturbed images won’t work (in that a model trained on the perturbed data won’t generate images that nicely mimic the targeted style). The Glaze authors are heavily pushing this tool as an effective defense (e.g., promoting it with articles in the New York Times).
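
To make the idea concrete, here is a minimal sketch of what a perturbation defense of this general flavor looks like. This is not Glaze’s actual algorithm: the feature extractor (an off-the-shelf VGG-16), the `cloak` function, and the budget `eps`, `steps`, and `lr` parameters are all illustrative choices. The common core is an optimization that nudges an image’s features toward a decoy style while keeping the pixel change small.

```python
# Illustrative sketch of a style-cloaking perturbation (NOT Glaze's
# actual algorithm): optimize a small pixel perturbation that pulls an
# artwork's feature embedding toward a "decoy" image in another style.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen feature extractor standing in for a style encoder.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def style_features(img_tensor):
    # Flatten conv features into one vector per image.
    return vgg(img_tensor).flatten(1)

def cloak(artwork: Image.Image, decoy: Image.Image,
          eps: float = 8 / 255, steps: int = 100, lr: float = 0.01) -> Image.Image:
    """Return `artwork` plus a small perturbation whose features
    resemble those of `decoy` (an image in an unrelated style)."""
    x = TF.to_tensor(TF.resize(artwork, [224, 224])).unsqueeze(0).to(device)
    t = TF.to_tensor(TF.resize(decoy, [224, 224])).unsqueeze(0).to(device)
    target = style_features(t).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the cloaked image's features toward the decoy's...
        loss = F.mse_loss(style_features((x + delta).clamp(0, 1)), target)
        loss.backward()
        opt.step()
        # ...while keeping the pixel change within an L_inf budget,
        # so the cloaked image still looks like the original artwork.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return TF.to_pil_image((x + delta).clamp(0, 1).squeeze(0).cpu())
```

The key property (and, as it turns out, the key weakness) is that all of the protection lives in a small, carefully structured noise pattern added on top of the image.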

Unfortunately, this style of perturbation-based defense against machine learning just doesn’t work. We’ve broken previous versions of schemes just like this. And we recently put out a paper showing that Glaze doesn’t work either (and no, you can’t just patch Glaze and say “there, I fixed it!”, because once the tool is broken, any security it provided is irremediably lost: artists can’t retroactively re-protect the images they have already published). We encourage you to read the paper for details on why these types of schemes don’t work, but that’s not going to be the (main) focus of this post.
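
To illustrate the flavor of the problem (the attacks in our paper are more sophisticated than this), even trivial “purification” of an image before training can destroy a fixed perturbation. The `purify` function and its `quality` and `scale` parameters below are hypothetical choices for the sketch:

```python
# Minimal sketch of why fixed perturbations are fragile: a mimic can
# "purify" downloaded images before training, wiping out the noise
# the defense depends on. (Real attacks are stronger than this.)
from io import BytesIO
from PIL import Image

def purify(img: Image.Image, quality: int = 75, scale: float = 0.5) -> Image.Image:
    # Lossy JPEG re-encoding discards much of the high-frequency
    # structure that perturbation schemes rely on...
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    img = Image.open(buf)
    # ...and a downscale/upscale round trip smooths what survives.
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    return small.resize((w, h), Image.LANCZOS)
```

Because the defender must commit to a perturbation when the image is published, while the attacker can apply any purification afterwards, the attacker always gets the last move.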

Rather, today, we want to talk about how we believe the Glaze team misses the mark on how to properly care for the security of their users, by:
