Instructions for using google/siglip-so400m-patch14-224 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google/siglip-so400m-patch14-224 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="google/siglip-so400m-patch14-224")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("google/siglip-so400m-patch14-224")
model = AutoModelForZeroShotImageClassification.from_pretrained("google/siglip-so400m-patch14-224")
```

- Notebooks
- Google Colab
- Kaggle
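The two snippets above can be combined into an end-to-end example that runs the model directly instead of through the pipeline. This is a minimal sketch, assuming `torch`, `Pillow`, and `requests` are installed; the image URL is the same documentation example used above. Note that SigLIP probabilities are computed with a per-label sigmoid, not a softmax across labels.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("google/siglip-so400m-patch14-224")
model = AutoModelForZeroShotImageClassification.from_pretrained("google/siglip-so400m-patch14-224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# SigLIP's text tower expects fixed-length inputs, hence padding="max_length".
inputs = processor(text=labels, images=image, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# SigLIP was trained with a sigmoid loss, so each label is scored
# independently with a sigmoid rather than a softmax over all labels.
probs = torch.sigmoid(outputs.logits_per_image)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the scores are independent sigmoids, they need not sum to 1, and several labels can score high at once.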
Why is max_position_embeddings set to only 16 instead of 64?
Discussion #2 · opened by lotusjx
Why does this model have a max_position_embeddings of 16 when others usually support 64?
Same question.
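The value in question can be checked without downloading the weights by fetching only the model configuration. A minimal sketch; the `16` asserted here is the value reported in this thread:

```python
from transformers import AutoConfig

# Fetch just the config from the Hub and inspect the text tower's
# maximum sequence length (max_position_embeddings).
config = AutoConfig.from_pretrained("google/siglip-so400m-patch14-224")
max_len = config.text_config.max_position_embeddings
print(max_len)
```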