---
library_name: araclip
tags:
- clip
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/Arabic-Clip/Araclip.git
## How to use
```bash
pip install git+https://github.com/Arabic-Clip/Araclip.git
```
```python
# load the model
import numpy as np
from PIL import Image

from araclip import AraClip

model = AraClip.from_pretrained("Arabic-Clip/araclip")

# candidate Arabic labels: "a sitting cat", "a jumping cat", "a dog", "a horse"
labels = ["قطة جالسة", "قطة تقفز", "كلب", "حصان"]
image = Image.open("cat.png")

# embed the image and each label
image_features = model.embed(image=image)
text_features = np.stack([model.embed(text=label) for label in labels])

# search for the label most similar to the image
similarities = text_features @ image_features
best_match = labels[np.argmax(similarities)]
print(f"The image is most similar to: {best_match}")
# قطة جالسة ("a sitting cat")
```
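The dot products returned by `text_features @ image_features` are unnormalized scores. If you want relative confidences across the labels, a common follow-up is applying a softmax. A minimal sketch using only NumPy; the similarity values below are made-up placeholders standing in for the model's output:

```python
import numpy as np

# placeholder scores, as would come from `text_features @ image_features`
similarities = np.array([0.31, 0.18, 0.05, 0.02])

# softmax turns raw scores into a probability distribution over the labels;
# subtracting the max first keeps the exponentials numerically stable
exp_scores = np.exp(similarities - similarities.max())
probs = exp_scores / exp_scores.sum()

labels = ["a sitting cat", "a jumping cat", "a dog", "a horse"]
for label, p in zip(labels, probs):
    print(f"{label}: {p:.2%}")
```

Note that the softmax does not change which label wins (`np.argmax` is the same either way); it only rescales the scores so they sum to 1.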
