samim2024 committed · Commit c758137 · verified · 1 Parent(s): 127a188

Update README.md

Files changed (1):
  1. README.md +3 -6
README.md CHANGED

@@ -11,13 +11,10 @@ library_name: transformers
 ### The CLIP model was pretrained from openai/clip-vit-base-patch32, to learn about what contributes to robustness in computer vision tasks.
 ### The model has the ability to generalize to arbitrary image classification tasks in a zero-shot manner.
 
-Top predictions:
 
-Saree: 64.89%
-Dupatta: 25.81%
-Lehenga: 7.51%
-Leggings and Salwar: 0.84%
-Women Kurta: 0.44%
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/660bc03b5294ca0aada80fb9/GbMx1RHZOdW4vlOn0HfiI.png)
+
+
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/660bc03b5294ca0aada80fb9/Kl8Yd8fwFLtmeDbBLi4Fz.png)
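The percentage scores in the removed "Top predictions" list are produced by taking a softmax over CLIP's image-text similarity logits, one logit per candidate label. A minimal sketch of that scoring step, using hypothetical logit values for illustration (in practice the logits come from running the model, e.g. via `CLIPModel`/`CLIPProcessor` in the transformers library):

```python
import math

def zero_shot_scores(logits, labels):
    """Softmax image-text similarity logits into per-label probabilities,
    returned highest-first, as in a zero-shot classification readout."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)

# Candidate labels from the README; logit values below are made up for the sketch.
labels = ["Saree", "Dupatta", "Lehenga", "Leggings and Salwar", "Women Kurta"]
logits = [5.0, 4.1, 2.8, 0.6, 0.0]  # hypothetical, not real model output
for label, prob in zero_shot_scores(logits, labels):
    print(f"{label}: {prob:.2%}")
```

Because the labels are supplied as free text at inference time, the same scoring works for any label set, which is what "zero-shot" means here.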