BAAI/AltCLIP — Zero-Shot Image Classification
Tags: Transformers · PyTorch · altclip · bilingual · English · Chinese
Instructions to use BAAI/AltCLIP with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use BAAI/AltCLIP with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="BAAI/AltCLIP")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("BAAI/AltCLIP")
model = AutoModelForZeroShotImageClassification.from_pretrained("BAAI/AltCLIP")
```

- Notebooks
- Google Colab
- Kaggle
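Under the hood, the pipeline scores each candidate label by image–text similarity and normalizes the scores with a softmax. A minimal sketch of that final scoring step, using illustrative dummy logits rather than real AltCLIP outputs:

```python
import math

# AltCLIP-style zero-shot scoring: each candidate label is embedded as text,
# compared against the image embedding, and the resulting similarity logits
# are turned into probabilities with a softmax. The logits below are
# hypothetical dummy values, not actual model outputs.
labels = ["animals", "humans", "landscape"]
logits = [24.1, 18.3, 19.7]  # hypothetical image-text similarity scores

# Numerically stable softmax: subtract the max before exponentiating.
exps = [math.exp(x - max(logits)) for x in logits]
total = sum(exps)
probs = {label: e / total for label, e in zip(labels, exps)}

best = max(probs, key=probs.get)
print(best)  # label with the highest probability
```

The pipeline returns the same information as a list of `{"label", "score"}` dicts sorted by score.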
Commit 1eb6dd2 · Update config.json
Parent(s): b989a7b
Files changed: config.json (+1 −1)
config.json
CHANGED

```diff
@@ -7,7 +7,7 @@
   "direct_kd": false,
   "initializer_factor": 1.0,
   "logit_scale_init_value": 2.6592,
-  "model_type": "
+  "model_type": "text-to-image",
   "num_layers": 3,
   "projection_dim": 768,
   "text_config": {
```
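The commit changes a single `model_type` field. A minimal sketch of checking such an edit with Python's `json` module, using a hand-written fragment that mirrors the changed region of the diff rather than the actual BAAI/AltCLIP `config.json`:

```python
import json

# Hypothetical fragment mirroring the changed region of config.json;
# field values are copied from the diff, the rest of the real file is omitted.
config_text = """
{
  "direct_kd": false,
  "initializer_factor": 1.0,
  "logit_scale_init_value": 2.6592,
  "model_type": "text-to-image",
  "num_layers": 3,
  "projection_dim": 768
}
"""

config = json.loads(config_text)
print(config["model_type"])  # the value introduced by this commit
```

In Transformers, `model_type` is the key that `AutoConfig` uses to map a `config.json` to a registered configuration class, which is why a one-line edit like this matters.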