
anishathalye / neural-hash-collider

submitted by
Style Pass
2021-08-19 01:30:03

For example, starting from a picture of this cat, we can find an adversarial image that has the same hash as the picture of the dog in this post:

NeuralHash is a perceptual hash function that uses a neural network. Images are resized to 360x360 and passed through a neural network to produce a 128-dimensional feature vector. Then, the vector is projected onto R^96 using a 128x96 "seed" matrix. Finally, to produce a 96-bit hash, the 96-dimensional vector is thresholded: negative entries turn into a 0 bit, and non-negative entries turn into a 1 bit.
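The steps above can be sketched in a few lines of NumPy. This is an illustrative toy, not the real model: `network` is a stand-in for the actual neural network (here, crude average pooling into 128 buckets), and `SEED` plays the role of the 96x128 seed matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real CNN: average-pool the flattened image into 128
# buckets to get a 128-dimensional "feature vector". The real network is
# a trained model; any map to R^128 works to illustrate the hashing steps.
def network(image):
    flat = image.reshape(-1)
    usable = (flat.size // 128) * 128
    return flat[:usable].reshape(128, -1).mean(axis=1)

# Plays the role of the 96x128 "seed" matrix that projects R^128 -> R^96.
SEED = rng.standard_normal((96, 128))

def neural_hash(image):
    features = network(image)             # 128-dim feature vector
    projection = SEED @ features          # project onto R^96
    bits = (projection >= 0).astype(int)  # negative -> 0, non-negative -> 1
    return "".join(map(str, bits))        # 96-bit hash

img = rng.random((360, 360, 3))           # a "resized" 360x360 image
print(len(neural_hash(img)))              # 96 bits
```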

This entire process, except for the thresholding, is differentiable, so we can use gradient descent to find hash collisions. This exploits a well-known property of neural networks: they are vulnerable to adversarial examples.

We can define a loss that captures how close an image is to a given target hash: the loss is basically just the NeuralHash algorithm as described above, but with the final "hard" thresholding step relaxed so that it is "soft" (in particular, differentiable). Exactly how this is done (the choice of activation function, parameters, etc.) can affect convergence, so it may require some experimentation. Refer to collide.py to see what the implementation currently does.
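To make the "soft thresholding" concrete, here are two plausible relaxations, sketched under assumptions: `logits` stands for the 96-dimensional projected vector, `target_bits` is the target hash as 0/1, and `temperature` is an illustrative knob. The names are mine, not collide.py's.

```python
import numpy as np

def soft_loss_tanh(logits, target_bits, temperature=1.0):
    # Map target bits {0,1} -> {-1,+1} and push tanh(logits/T) toward them.
    signs = 2 * target_bits - 1
    return np.mean((np.tanh(logits / temperature) - signs) ** 2)

def soft_loss_sigmoid(logits, target_bits, temperature=1.0):
    # Treat each bit as a logistic "classifier": binary cross-entropy
    # between sigmoid(logits/T) and the target bit.
    p = 1 / (1 + np.exp(-logits / temperature))
    eps = 1e-12
    return -np.mean(target_bits * np.log(p + eps)
                    + (1 - target_bits) * np.log(1 - p + eps))

logits = np.array([-2.0, 0.5, 3.0])
bits = np.array([0.0, 1.0, 1.0])
print(soft_loss_tanh(logits, bits), soft_loss_sigmoid(logits, bits))
```

Lower temperature approximates the hard threshold more closely but makes gradients steeper (and eventually vanishing), which is exactly the kind of trade-off the post says may require experimentation.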
