Instructions to use openai/clip-vit-large-patch14 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openai/clip-vit-large-patch14 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14")
```
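The "load model directly" snippet stops after loading the processor and model. As a minimal sketch of a manual forward pass, assuming the parrots image and candidate labels from the pipeline example above (the softmax step is standard CLIP usage, not part of the original snippet):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

# Load processor and model as in the snippet above
processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# CLIP scores each image-text pair; logits_per_image holds one score per label
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the label scores gives the probabilities the pipeline would return
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0]):
    print(f"{label}: {p.item():.3f}")
```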
- Notebooks
- Google Colab
- Kaggle
Add widget example input (#6), opened by mishig (HF Staff)
README.md CHANGED

```diff
@@ -1,6 +1,10 @@
 ---
 tags:
 - vision
+widget:
+- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
+  candidate_labels: playing music, playing sports
+  example_title: Cat & Dog
 ---
 
 # Model Card: CLIP
```
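The widget entry added above maps directly onto a zero-shot pipeline call. A minimal sketch reusing the `src` URL and `candidate_labels` from the diff (not a definitive statement of what the Hub widget executes internally):

```python
from transformers import pipeline

# Same inputs as the "Cat & Dog" widget example in the YAML front matter
pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
result = pipe(
    "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png",
    candidate_labels=["playing music", "playing sports"],
)
print(result)  # list of {"label": ..., "score": ...} dicts, one per candidate label
```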