Instructions for using openai/clip-vit-base-patch32 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use openai/clip-vit-base-patch32 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32")
```

A short inference sketch using the loaded processor and model follows the notebook links below.

- Notebooks
  - Google Colab
  - Kaggle
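As referenced above, here is a minimal sketch of running zero-shot classification with the directly loaded processor and model. The image URL and candidate labels are reused from the pipeline example; the softmax over `logits_per_image` follows CLIP's standard usage and is an assumption about how you want to read the scores, not something shown on this page.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = AutoModelForZeroShotImageClassification.from_pretrained("openai/clip-vit-base-patch32")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# The processor tokenizes the label texts and resizes/normalizes the image.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores, shape (1, num_labels).
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```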
- Xet hash: 9a4182c2d25adc939c80c58f4b79e353c9c7fe75a45646555979356b3e6588ff
- Size of remote file: 605 MB
- SHA256: 2a2994d89bebd77abba5a554789dd9152a7e25467b79d88f6bd237d2dec5051c
Xet efficiently stores large files inside Git by splitting them into unique, deduplicated chunks, which accelerates uploads and downloads.
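Since the page lists a SHA256 for the model weights, a downloaded copy can be verified against it. This is a minimal sketch assuming the file has already been fetched; the local path `model.safetensors` is a hypothetical example, not taken from this page.

```python
import hashlib

EXPECTED_SHA256 = "2a2994d89bebd77abba5a554789dd9152a7e25467b79d88f6bd237d2dec5051c"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks to avoid loading ~605 MB into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local filename for the downloaded weights.
assert sha256_of("model.safetensors") == EXPECTED_SHA256, "checksum mismatch"
```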