Safetensors
clip
megaelius committed on
Commit 65b4378 · verified · 1 Parent(s): 803f02a

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -4,10 +4,10 @@ datasets:
 - ILSVRC/imagenet-1k
 - mlfoundations/datacomp_small
 base_model:
-- laion/CLIP-ViT-g-14-laion2B-s34B-b88K
+- laion/CLIP-ViT-g-14-laion2B-s12B-b42K
 ---

-Model Initialized from `laion/CLIP-ViT-g-14-laion2B-s34B-b88K`. The image encoder is finetuned with FARE at $\epsilon=2/255$. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$.
+Model Initialized from `laion/CLIP-ViT-g-14-laion2B-s12B-b42K`. The image encoder is finetuned with FARE at $\epsilon=2/255$. The text encoder is finetuned with LEAF at $k=1$ with $\rho=50$.

 To load this model use:

@@ -15,7 +15,7 @@ To load this model use:
 from transformers import CLIPProcessor, CLIPModel

 model_name = "LEAF-CLIP/OpenCLIP-ViT-g-rho50-k1-FARE2"
-processor_name = "laion/CLIP-ViT-g-14-laion2B-s34B-b88K"
+processor_name = "laion/CLIP-ViT-g-14-laion2B-s12B-b42K"

 model = CLIPModel.from_pretrained(model_name)
 processor = CLIPProcessor.from_pretrained(processor_name)
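The card states the image encoder is finetuned with FARE at $\epsilon=2/255$ but does not define the budget. A minimal sketch, assuming (as is standard in the adversarial fine-tuning literature FARE belongs to) that $\epsilon$ bounds an $L_\infty$ perturbation of the input image on a $[0, 1]$ pixel scale; the function name `project_linf` and the toy tensor are illustrative, not part of the model card:

```python
import numpy as np

# Assumed interpretation: eps = 2/255 is the L-infinity radius of the
# allowed adversarial perturbation around each training image.
eps = 2 / 255

def project_linf(x_adv: np.ndarray, x: np.ndarray, eps: float) -> np.ndarray:
    """Project a perturbed image back into the L-inf ball of radius eps around x."""
    return np.clip(x_adv, x - eps, x + eps)

rng = np.random.default_rng(0)
x = np.full((3, 4, 4), 0.5)                       # toy "image" in [0, 1]
x_adv = x + rng.uniform(-0.05, 0.05, x.shape)     # perturbation exceeding the budget
x_proj = project_linf(x_adv, x, eps)              # now within eps of x everywhere
```

After projection, no pixel of `x_proj` deviates from `x` by more than $2/255 \approx 0.0078$, which is the sense in which the FARE budget keeps adversarial images visually indistinguishable from the originals.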