megaelius committed (verified)
Commit 59952c2 · Parent(s): bb08f11

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -7,14 +7,14 @@ base_model:
   - laion/CLIP-ViT-g-14-laion2B-s34B-b88K
 ---
 
-Model Initialized from `laion/CLIP-ViT-g-14-laion2B-s34B-b88K`. The image encoder is finetuned with FARE at $\epsilon=2/255$.
+Model Initialized from `laion/CLIP-ViT-g-14-laion2B-s12B-b42K`. The image encoder is finetuned with FARE at $\epsilon=2/255$.
 To load this model use:
 
 ```python
 from transformers import CLIPProcessor, CLIPModel
 
 model_name = "LEAF-CLIP/OpenCLIP-ViT-g-FARE2"
-processor_name = "laion/CLIP-ViT-g-14-laion2B-s34B-b88K"
+processor_name = "laion/CLIP-ViT-g-14-laion2B-s12B-b42K"
 
 model = CLIPModel.from_pretrained(model_name)
 processor = CLIPProcessor.from_pretrained(processor_name)
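
The loading snippet in the updated card can be extended into a full zero-shot inference pass. The sketch below is illustrative, not part of the model card: the placeholder image and caption strings are assumptions, and running it requires network access to the Hugging Face Hub to download the checkpoint.

```python
# Minimal zero-shot sketch for the checkpoint named in the updated README.
# The red placeholder image and the candidate captions are illustrative only.
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model_name = "LEAF-CLIP/OpenCLIP-ViT-g-FARE2"
processor_name = "laion/CLIP-ViT-g-14-laion2B-s12B-b42K"

model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(processor_name)

image = Image.new("RGB", (224, 224), color="red")  # synthetic placeholder input
texts = ["a red square", "a photo of a cat"]

# The processor handles both image preprocessing and text tokenization.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax over the
# candidate captions turns them into zero-shot classification probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

Note that the processor is loaded from the base `laion` repository while the weights come from the finetuned `LEAF-CLIP` repository, matching the pattern in the card itself.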