Instructions for using openai/clip-vit-large-patch14 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use openai/clip-vit-large-patch14 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14")
```
- Notebooks
  - Google Colab
  - Kaggle
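The pipeline above is a thin wrapper around the processor and model. For reference, here is a minimal end-to-end sketch of the direct-loading path (assuming Pillow, requests, and torch are installed); it scores the same candidate labels against the example image via the model's image-text similarity logits:

```python
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14")

# Same example image and candidate labels as the pipeline snippet above
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# Tokenize the label texts and preprocess the image in one call
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores; softmax turns
# them into a probability over the candidate labels
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```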
Update config.json #4
by arda13olmez - opened

- config.json (+2 −2)
config.json CHANGED

```diff
@@ -84,7 +84,7 @@
     "intermediate_size": 3072,
     "num_attention_heads": 12,
     "num_hidden_layers": 12,
-    "projection_dim": 512
+    "projection_dim": 768
   },
   "torch_dtype": "float32",
   "transformers_version": null,
@@ -166,6 +166,6 @@
     "num_attention_heads": 16,
     "num_hidden_layers": 24,
     "patch_size": 14,
-    "projection_dim": 512
+    "projection_dim": 768
   }
 }
```
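The two hunks set projection_dim to 768 in both the text and vision sub-configs, matching the width of ViT-L/14's shared image-text embedding space. As a quick sanity check, here is a hedged sketch (assuming the updated config has been merged and a recent transformers version that provides CLIPTextModelWithProjection) that verifies the nested fields and the resulting projection head size:

```python
# Sanity check for this change (a sketch, assuming the updated config is live):
# the nested projection_dim fields should read 768, and the text projection
# head should accordingly project to 768 dimensions.
from transformers import AutoConfig, CLIPTextModelWithProjection

config = AutoConfig.from_pretrained("openai/clip-vit-large-patch14")
print(config.text_config.projection_dim)    # expected: 768
print(config.vision_config.projection_dim)  # expected: 768

model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-large-patch14")
print(model.text_projection.out_features)   # expected: 768
```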