How to use waelhasan/clip-vit-base-patch32 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="waelhasan/clip-vit-base-patch32")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageClassification

processor = AutoProcessor.from_pretrained("waelhasan/clip-vit-base-patch32")
model = AutoModelForImageClassification.from_pretrained("waelhasan/clip-vit-base-patch32")
```
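When loading the model directly, the forward pass returns raw logits rather than labeled scores, so you convert them to probabilities yourself. A minimal sketch of that post-processing step, using hypothetical logits in place of a real `model(**inputs).logits` output (the class count and values here are illustrative, not from the actual model):

```python
import numpy as np

# Hypothetical logits standing in for model(**inputs).logits
# over three candidate classes (the real model defines its own label set).
logits = np.array([1.2, 0.3, 2.5])

# Softmax: subtract the max for numerical stability, exponentiate, normalize.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Index of the highest-probability class; on a real model, map this through
# model.config.id2label to get a human-readable label.
predicted_class = int(np.argmax(probs))
```

The pipeline in the first snippet performs this same preprocessing and softmax step internally, which is why it can return labeled scores directly.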