# Fashion CLIP ViT-B/32

Fine-tuned CLIP model for fashion image-text retrieval.
## Model Details
- Base model: openai/clip-vit-base-patch32
- Fine-tuned on a fashion dataset
- Task: image-text similarity & retrieval
## Usage

```python
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("rakeshjv2000/fashion-clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("rakeshjv2000/fashion-clip-vit-base-patch32")
```
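Once loaded, the model maps images and texts into a shared embedding space, and retrieval reduces to ranking by cosine similarity. A minimal sketch of that ranking step, using dummy embedding vectors in place of real model outputs (the embeddings and captions below are illustrative, not produced by this model):

```python
import numpy as np

def rank_texts(image_emb, text_embs):
    """Rank candidate text embeddings against one image embedding."""
    # L2-normalize so the dot product equals cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = text_embs @ image_emb
    # Indices sorted from best to worst match
    return np.argsort(scores)[::-1], scores

# Hypothetical embeddings standing in for CLIP outputs
image_emb = np.array([1.0, 0.0, 0.0])
text_embs = np.array([
    [0.9, 0.1, 0.0],  # e.g. "a red dress" (close to the image)
    [0.0, 1.0, 0.0],  # e.g. "a leather boot"
])

order, scores = rank_texts(image_emb, text_embs)
print(order[0])  # index of the best-matching caption
```

In practice you would obtain the real embeddings via `model.get_image_features` and `model.get_text_features` on inputs prepared by the processor, normalizing them the same way before comparing.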