Fashion CLIP ViT-B/32

A CLIP ViT-B/32 model fine-tuned for fashion image-text retrieval.

Model Details

  • Base model: openai/clip-vit-base-patch32
  • Fine-tuned on a fashion image-text dataset
  • Task: image-text similarity and retrieval
  • Size: ~0.2B parameters, F32 weights in safetensors format

Usage

from transformers import CLIPModel, CLIPProcessor

# Load the fine-tuned checkpoint and its paired processor from the Hub
model = CLIPModel.from_pretrained("rakeshjv2000/fashion-clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("rakeshjv2000/fashion-clip-vit-base-patch32")
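
A minimal scoring sketch, continuing from the snippet above. The image file and candidate captions are placeholders for illustration, not part of the released model:

import torch
from PIL import Image

# Score one image against several candidate captions (placeholders)
image = Image.open("dress.jpg")
texts = ["a red evening dress", "blue denim jeans", "white leather sneakers"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores;
# softmax turns them into probabilities over the candidate captions
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)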
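For retrieval over a catalog, it is usually cheaper to precompute embeddings once and rank by cosine similarity. A sketch using the standard CLIPModel feature methods; the file names and query text are hypothetical:

import torch
from PIL import Image

# Precompute L2-normalized image embeddings for a (hypothetical) catalog
catalog = [Image.open(p) for p in ["item1.jpg", "item2.jpg"]]
image_inputs = processor(images=catalog, return_tensors="pt")
with torch.no_grad():
    image_emb = model.get_image_features(**image_inputs)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

# Embed a text query the same way
text_inputs = processor(text=["floral summer dress"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Rank catalog items by cosine similarity to the query
scores = text_emb @ image_emb.T
best = scores.argmax(dim=-1)
print(scores, best)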