

Submitted by Style Pass, 2024-06-11 02:00:03

This repository re-creates the results from the paper Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning: From InfoNCE to Kernel-Based Losses. It compares the performance of different contrastive learning loss functions on the CIFAR-100 dataset and on text pretraining for the MS MARCO passage ranking dataset. The following loss functions are compared:

The following image shows a comparison of the loss functions. The DHEL and DCL losses converge faster than the other loss functions, in the order reported in the paper.
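To make the comparison concrete, here is a minimal NumPy sketch (not the repository's actual code) of two of the compared losses: standard InfoNCE and the decoupled contrastive loss (DCL), which differs only in dropping the positive pair from the denominator. Batch size, dimension, and temperature below are illustrative assumptions.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE: each positive pair scored against all cross-view pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize embeddings
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                # (N, N) similarity matrix
    # row-wise log-softmax; the diagonal holds the positive pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def dcl(z1, z2, tau=0.5):
    """Decoupled contrastive loss: the positive term is removed from the denominator."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    pos = np.diag(sim)                                   # positive-pair similarities
    neg = np.exp(sim) * (1.0 - np.eye(len(z1)))          # mask out the positives
    return np.mean(-pos + np.log(neg.sum(axis=1)))
```

Because DCL removes the positive pair from the denominator, its value is always strictly below the InfoNCE loss on the same batch; the repository's experiments compare how this change affects convergence speed.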
