---
license: cc-by-nc-4.0
---

# CLIP

Contrastive Language-Image Pre-training (CLIP) model pre-trained on 2.5 billion image-text pairs curated from CommonCrawl, at a resolution of 224x224. CLIP was introduced in the paper Learning Transferable Visual Models From Natural Language Supervision; this checkpoint follows the data-curation recipe of the follow-up paper Demystifying CLIP Data (MetaCLIP). The weights were converted from the `b16_fullcc2.5b.pt` file provided in the original MetaCLIP repository.
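A converted CLIP checkpoint like this is typically used for zero-shot image classification: the image and a set of free-form text labels are embedded, and the image-text similarity logits are turned into probabilities over the labels. A minimal sketch with the Hugging Face `transformers` API follows; the `MODEL_ID` below is a placeholder assumption, substitute the actual id of this repository.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder id -- replace with the id of this model repository.
MODEL_ID = "facebook/metaclip-b16-fullcc2.5b"


def zero_shot_classify(image: Image.Image, labels: list[str],
                       model_id: str = MODEL_ID) -> dict[str, float]:
    """Score an image against free-form text labels with CLIP."""
    model = CLIPModel.from_pretrained(model_id)
    processor = CLIPProcessor.from_pretrained(model_id)
    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: (num_images, num_labels) similarity scores;
    # softmax over labels gives a probability per candidate label.
    probs = outputs.logits_per_image.softmax(dim=-1)
    return dict(zip(labels, probs[0].tolist()))


if __name__ == "__main__":
    img = Image.new("RGB", (224, 224))  # dummy image for illustration
    print(zero_shot_classify(img, ["a photo of a cat", "a photo of a dog"]))
```

Note that the processor resizes and normalizes images to the 224x224 resolution the model was pre-trained at, so inputs of any size can be passed.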