updated readme

README.md
# Model Card: CLIP

Disclaimer: The model card is taken and modified from the official CLIP repository; it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).

I just added a custom handler, which is required for multimodal models like this one.

A custom handler is present for the CLIP patch32 model, but not for CLIP large patch14. Source: https://huggingface.co/philschmid/clip-zero-shot-image-classification
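
For context, a custom handler for Inference Endpoints is a `handler.py` at the repo root exposing an `EndpointHandler` class. The sketch below is modeled on the philschmid repo linked above; it is an assumption about what this repo's handler looks like, not a copy of it.

```python
# handler.py: a minimal sketch of an Inference Endpoints custom handler,
# modeled on philschmid/clip-zero-shot-image-classification. The handler
# actually shipped with this repo may differ.
import base64
from io import BytesIO

from PIL import Image
from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # load a zero-shot image classification pipeline from the repo weights
        self.pipeline = pipeline("zero-shot-image-classification", model=path)

    def __call__(self, data: dict):
        inputs = data.pop("inputs", data)
        # decode the base64-encoded image sent by the client
        image = Image.open(BytesIO(base64.b64decode(inputs["image"])))
        # the key "candiates" (sic) matches the client payload shown below
        prediction = self.pipeline(
            images=[image], candidate_labels=inputs["candiates"]
        )
        return prediction[0]
```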

## Python code to run inference after deploying the model to a dedicated Hugging Face Inference Endpoint

```python
import base64
from typing import List

import requests as r

# fill in the deployed endpoint URL and a Hugging Face token with access to it
ENDPOINT_URL = ""
HF_TOKEN = ""


def predict(path_to_image: str = None, candidates: List[str] = None):
    # read the image and base64-encode it so it can travel in the JSON payload
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())

    # the key "candiates" (sic) is what the custom handler expects
    payload = {"inputs": {"image": b64.decode("utf-8"), "candiates": candidates}}
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(
    path_to_image="/Users/user/Downloads/....", candidates=["Item1", "Item2"]
)

print(prediction)
```
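
Assuming the handler follows the philschmid pattern sketched above, the response is the pipeline's list of label/score pairs; the scores below are purely illustrative:

```json
[
  { "label": "Item1", "score": 0.97 },
  { "label": "Item2", "score": 0.03 }
]
```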

## Model Details