Instructions for using zer0int/LongCLIP-GmP-ViT-L-14 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use zer0int/LongCLIP-GmP-ViT-L-14 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="zer0int/LongCLIP-GmP-ViT-L-14")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("zer0int/LongCLIP-GmP-ViT-L-14")
model = AutoModelForZeroShotImageClassification.from_pretrained("zer0int/LongCLIP-GmP-ViT-L-14")
```

A combined end-to-end example is sketched after the notebook links below.
- Notebooks
- Google Colab
- Kaggle
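The snippet below combines the processor and model from the "Load model directly" example into one zero-shot classification pass. It is a sketch rather than part of the model card: the image URL and candidate labels are simply the placeholders reused from the pipeline example, and it assumes the checkpoint loads cleanly with the stock AutoProcessor and AutoModelForZeroShotImageClassification classes shown above.

```python
# Minimal zero-shot classification sketch using the classes from the snippets above.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

model_id = "zer0int/LongCLIP-GmP-ViT-L-14"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotImageClassification.from_pretrained(model_id)

# Example image and labels, reused from the pipeline example (swap in your own data).
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# Encode the image and candidate labels together, then score image-text similarity.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_labels); softmax gives per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

Softmaxing logits_per_image over the candidate labels is roughly what the pipeline helper does internally for CLIP-style models, so the printed probabilities should line up with the pipeline output.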
Community discussions:
- #8: Is the SeaArtLab node necessary? (1 reply, opened over 1 year ago by wcyber2077)
- #6: Your LongClip and sd1.15 (6 replies, opened over 1 year ago by miasik)
- #5: i can't really make sense why some models work and some models just will not work at all. (5 replies, opened over 1 year ago by kellempxt)
- #4: Is this only the ContrastiveLoss finetuning? Did you use the Coarse-grained alignment loss proposed in LongClip? (1 reply, opened over 1 year ago by cuifeng)
- #3: "Finally working: Redundant TEXT model for HF inference". Could you do the same thing for this LongClip? (7 replies, opened over 1 year ago by kk3dmax)
- #2: AssertionError: You do not have CLIP state dict! (7 replies, opened over 1 year ago by PixelClassisist)
- #1: SDXL usage (4 replies, opened over 1 year ago by apiasecki)