Ente is an end-to-end encrypted photos app. Since the server never sees plaintext photos, any algorithm that operates on them must run entirely on the edge, i.e. on the user's device. Below is a summary of the model and the framework behind our implementation of image-text semantic search.
CLIP (Contrastive Language-Image Pre-Training) is the most widely used multi-modal neural network for this task. It is trained on a large variety of image-text pairs to learn a shared representation across the two modalities.
Architecturally, the model consists of two encoders, one for images and one for text. Each encoder outputs an embedding vector that represents, as inferred by the model, the information content of its input.
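Because both encoders map into the same embedding space, semantic search reduces to comparing a text embedding against precomputed image embeddings. Here is a minimal NumPy sketch of that comparison step; the hand-written vectors are toy stand-ins for real encoder outputs, not values produced by CLIP.

```python
import numpy as np

def normalise(v):
    # L2-normalise so that a dot product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy stand-ins for image-encoder outputs: in CLIP, an image and a caption
# describing it land close together in the shared embedding space.
image_embeddings = normalise(np.array([
    [0.9, 0.1, 0.0],   # photo of a dog
    [0.0, 0.8, 0.2],   # photo of a beach
    [0.1, 0.1, 0.9],   # photo of a mountain
]))

# Toy stand-in for the text-encoder output of the query "a dog".
query_embedding = normalise(np.array([0.85, 0.15, 0.05]))

# Rank photos by cosine similarity to the query.
scores = image_embeddings @ query_embedding
best = int(np.argmax(scores))
print(best)  # → 0, the dog photo
```

In the real app the same ranking is done with embeddings produced by the CLIP encoders on-device; only the similarity computation is shown here.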
The training objective, in the words of the CLIP paper, is: "Given an image, predict which out of a set of 32,768 randomly sampled text snippets was actually paired with it in our dataset" [1]. This forces the model to extract valuable features from the image that match the paired textual content, since any mismatch is penalised by the loss function.
The "Contrastive" in CLIP refers to this training method: the vector representations of each ground-truth image-text pair are pulled together, while all other (mismatched) combinations are pushed apart, using a contrastive loss function.
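A minimal NumPy sketch of this symmetric contrastive objective (a cross-entropy over cosine-similarity logits, in the style of the CLIP paper) can make the "pull together / push apart" behaviour concrete. The temperature value and the toy embeddings below are illustrative assumptions, not the paper's training configuration.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over similarity logits (CLIP-style sketch).

    image_emb, text_emb: (N, D) arrays where row i of each is a matched pair.
    """
    # L2-normalise so dot products are cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # logits[i, j] = scaled similarity between image i and text j.
    logits = image_emb @ text_emb.T / temperature

    def cross_entropy(l):
        # Row-wise log-softmax; the correct class for row i is column i,
        # so the matched pairs sit on the diagonal.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
images = rng.normal(size=(4, 8))
matched_texts = images + 0.01 * rng.normal(size=(4, 8))  # near-perfect pairs
shuffled_texts = matched_texts[::-1]                     # every pair mismatched

low = clip_contrastive_loss(images, matched_texts)
high = clip_contrastive_loss(images, shuffled_texts)
print(low < high)  # aligned pairs yield a lower loss than mismatched ones
```

Minimising this loss is exactly what aligns the ground-truth pairs while diverging the other combinations.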