aayushgs committed on
Commit 0916c64 · verified · 1 Parent(s): 228603e
Files changed (1): README.md (+2 -1)
README.md CHANGED

@@ -14,6 +14,7 @@ I just added a custom handler which is required for multimodal models like this
 Custom handler present for CLIP model patch32, but not for the CLIP large patch14. Source: https://huggingface.co/philschmid/clip-zero-shot-image-classification
 
 ## Python code to run this after deploying it with HuggingFace's dedicated endpoint
+```
 import json
 from typing import List
 import requests as r
@@ -39,7 +40,7 @@ prediction = predict(
 )
 
 prediction
-
+```
 ## Model Details
 
 The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
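The diff only shows fragments of the README's client snippet (the imports and the tail of the `predict(...)` call). A minimal sketch of what such a client could look like is below, assuming the handler accepts a request body with an `inputs` image and a `parameters.candidate_labels` list, as in the philschmid/clip-zero-shot-image-classification repo the README cites; `ENDPOINT_URL` and `HF_TOKEN` are placeholders for your own deployment's values, and the exact payload schema must match your handler.

```python
import json
from typing import List

# Placeholders -- substitute your deployed endpoint's URL and an access token.
ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"


def build_payload(image_b64: str, candidate_labels: List[str]) -> str:
    """Serialize a zero-shot image-classification request body.

    Assumed schema (adjust to your handler): a base64-encoded image under
    `inputs` and the label set under `parameters.candidate_labels`.
    """
    return json.dumps(
        {
            "inputs": {"image": image_b64},
            "parameters": {"candidate_labels": candidate_labels},
        }
    )


def predict(image_b64: str, candidate_labels: List[str]) -> dict:
    """POST the payload to the dedicated endpoint (requires a live deployment)."""
    import requests as r  # imported lazily so the offline parts run without it

    response = r.post(
        ENDPOINT_URL,
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
        data=build_payload(image_b64, candidate_labels),
    )
    return response.json()


# Payload construction can be checked offline without calling the endpoint:
body = json.loads(build_payload("base64-image-bytes", ["cat", "dog"]))
print(body["parameters"]["candidate_labels"])  # ['cat', 'dog']
```

Keeping the payload builder separate from the HTTP call makes the request schema easy to adapt when the custom handler's expected fields differ.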