---
license: cc-by-nc-4.0
---

# CLIP

Contrastive Language-Image Pretraining (CLIP) model pre-trained on 2.5 billion data points of CommonCrawl at resolution 224x224. It was introduced in the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) and further reproduced in the follow-up paper [Demystifying CLIP Data](https://arxiv.org/abs/2309.16671).

The weights were converted from the `b32_fullcc2.5b.pt` file presented in the [original repository](https://github.com/facebookresearch/MetaCLIP).
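The contrastive objective behind CLIP can be illustrated with a minimal NumPy sketch (this is not the actual model: the embeddings below are random stand-ins, and `l2_normalize` is a hypothetical helper; only the scoring recipe — normalized embeddings compared by a temperature-scaled dot product — mirrors CLIP):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so the dot product is cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical embeddings for 2 images and 3 candidate captions
# (dimension 512, as in the ViT-B/32 projection head).
image_emb = l2_normalize(rng.standard_normal((2, 512)))
text_emb = l2_normalize(rng.standard_normal((3, 512)))

# CLIP scales cosine similarities by a learned temperature,
# initialized to 1/0.07 in the original paper.
logit_scale = 1.0 / 0.07
logits_per_image = logit_scale * image_emb @ text_emb.T  # shape (2, 3)

# Softmax over captions turns each image's row into a probability
# distribution over the candidate texts (zero-shot classification).
exp_logits = np.exp(logits_per_image - logits_per_image.max(axis=1, keepdims=True))
probs = exp_logits / exp_logits.sum(axis=1, keepdims=True)

print(probs.shape)
```

In the real model the image and text embeddings come from the vision and text towers rather than a random generator, but the comparison step is the same.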