nmndeep committed (verified) · Commit b4171a4 · 1 Parent(s): dc7798e

Update README.md

Files changed (1): README.md (+39 -8)

README.md CHANGED
@@ -1,8 +1,39 @@
- ---
- tags:
- - clip
- library_name: open_clip
- pipeline_tag: zero-shot-image-classification
- license: mit
- ---
- # Model card for CLIC-ViT-B-16-224-CogVLM
+ # Model Card for CLIC-ViT-B-16-224-CogVLM
+
+ ## Model Details
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Model details:** Fine-tuned with CLIC on a 1M-sample dataset relabelled with CogVLM
+
+ ## Model Usage
+ ### With OpenCLIP
+
+ ```python
+ import torch
+ from urllib.request import urlopen
+ from PIL import Image
+ import open_clip
+
+ # Load the fine-tuned model and its preprocessing transform from the Hub.
+ model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:nmndeep/CLIC-ViT-B-16-224-CogVLM')
+ model.eval()
+
+ tokenizer = open_clip.get_tokenizer('hf-hub:nmndeep/CLIC-ViT-B-16-224-CogVLM')
+
+ # Fetch and preprocess an example image.
+ image = image_processor(Image.open(urlopen(
+     'https://images.pexels.com/photos/869258/pexels-photo-869258.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1'))).unsqueeze(0)
+
+ # Candidate labels for zero-shot classification.
+ texts = ["a diagram", "a dog", "a cat", "snow"]
+ text = tokenizer(texts)
+
+ with torch.no_grad(), torch.autocast("cuda"):
+     image_features = model.encode_image(image)
+     text_features = model.encode_text(text)
+     image_features /= image_features.norm(dim=-1, keepdim=True)
+     text_features /= text_features.norm(dim=-1, keepdim=True)
+
+     text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
+
+ idx = torch.argmax(text_probs)
+ print("Output label:", texts[idx])
+ ```
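+
+ The snippet above prints only the top label. To inspect the full zero-shot distribution, the per-label probabilities can be read from `text_probs` directly; a minimal sketch, reusing the `texts` and `text_probs` variables from the example above:
+
+ ```python
+ # text_probs has shape (1, len(texts)); squeeze the batch dimension
+ # and print each candidate label with its softmax probability.
+ for label, prob in zip(texts, text_probs.squeeze(0).tolist()):
+     print(f"{label}: {prob:.3f}")
+ ```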