Instructions to use openai/clip-vit-base-patch32 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openai/clip-vit-base-patch32 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32")
```

- Notebooks
- Google Colab
- Kaggle
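Beyond the pipeline helper, the directly loaded processor and model can be used to score the candidate labels yourself. The sketch below is a minimal example, assuming PyTorch is installed and reusing the same parrots image and labels as the pipeline call above; `logits_per_image` is the image-to-text similarity produced by the CLIP model, and a softmax over it gives per-label probabilities.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# Tokenize the labels and preprocess the image in one call
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# One row per image, one column per candidate label
probs = outputs.logits_per_image.softmax(dim=-1)
```

Each entry of `probs` is the model's confidence that the image matches the corresponding label.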
Fix a small formatting bug
The code example used "python3" as the language in the markdown code fence, but this prevents syntax highlighting. Changing it to "python" fixes the error.
README.md CHANGED

````diff
@@ -34,7 +34,7 @@ The original implementation had two variants: one using a ResNet image encoder a
 
 ### Use with Transformers
 
-```
+```python
 from PIL import Image
 import requests
 
````