Instructions to use openai/clip-vit-large-patch14 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openai/clip-vit-large-patch14 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-large-patch14")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-large-patch14")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-large-patch14")
```

- Notebooks
- Google Colab
- Kaggle
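The snippets above load the processor and model but stop short of a forward pass. As a minimal sketch of running the model directly (reusing the image URL and candidate labels from the pipeline example; the label texts here are illustrative, not required by the API):

```python
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)

# Tokenize the candidate labels and preprocess the image in one call.
inputs = processor(
    text=["animals", "humans", "landscape"],
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax over the
# label dimension gives a probability per candidate label.
probs = outputs.logits_per_image.softmax(dim=1)
```

`probs` has one row per image and one column per candidate label, which is what the pipeline returns in dictionary form.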
Deploying Model on SageMaker
#8
by ABDOU9419 - opened
I deployed this model on AWS SageMaker and I'm trying to test it by sending this request:
```python
predictor.predict({
    "inputs": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png",
    "parameters": {
        "candidate_labels": ["playing music", "playing sports"]
    }
})
```
but the following error is shown:

```
An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from model with message "{
  "code": 400,
  "type": "InternalServerException",
  "message": "You have to specify pixel_values"
}
```
Can you help me fix this issue, please?
Hey, did you figure it out?
Having the same issues here :(
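One plausible cause of this 400: the stock SageMaker Hugging Face handler for this task expects preprocessed `pixel_values`, not an image URL, so the URL string in `"inputs"` never becomes an image. A hedged workaround sketch, not a verified fix: override the toolkit's `model_fn`/`predict_fn` hooks (these hook names come from the `sagemaker-huggingface-inference-toolkit`) with a custom `inference.py` that fetches the image itself, bundled under `code/` in the model archive. The URL-fetching logic below is an assumption for illustration.

```python
# inference.py -- hypothetical custom handler for the SageMaker HF toolkit
from io import BytesIO

import requests
from PIL import Image
from transformers import pipeline


def model_fn(model_dir):
    # Load the zero-shot pipeline from the model artifacts SageMaker unpacked.
    return pipeline("zero-shot-image-classification", model=model_dir)


def predict_fn(data, pipe):
    # Download the image ourselves, since the default handler does not
    # resolve URLs into pixel_values for this task.
    url = data["inputs"]
    image = Image.open(BytesIO(requests.get(url, timeout=10).content))
    labels = data.get("parameters", {}).get("candidate_labels", [])
    return pipe(image, candidate_labels=labels)
```

With this in place, the original `predictor.predict({"inputs": <url>, "parameters": {...}})` payload shape should reach `predict_fn` unchanged.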