
## 🚀 How to use SafeR-CLIP

SafeR-CLIP is a drop-in replacement for standard CLIP models. You can load it with the Hugging Face `transformers` library.

### Installation

```bash
pip install torch torchvision transformers
```

### Load SafeR-CLIP

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model_id = "Adeely93/SafeR_CLIP"

processor = CLIPProcessor.from_pretrained(model_id)
model = CLIPModel.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

model.eval()
```
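Since SafeR-CLIP behaves like standard CLIP, it embeds images and text into a shared space and scores pairs by cosine similarity. The sketch below illustrates only that scoring step, using random tensors as stand-ins for real embeddings (the 512-dimensional size is an assumption for illustration; in practice the embeddings come from `model.get_image_features(...)` and `model.get_text_features(...)`):

```python
import torch

# Stand-ins for real CLIP outputs; dimensions are hypothetical.
image_embeds = torch.randn(2, 512)  # 2 images
text_embeds = torch.randn(3, 512)   # 3 candidate captions

# CLIP-style scoring: L2-normalize both sides, then take dot products,
# so each entry of `logits` is a cosine similarity.
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
logits = image_embeds @ text_embeds.T  # shape (2, 3): image x caption

# Softmax over captions gives per-image matching probabilities.
probs = logits.softmax(dim=-1)
print(probs.shape)  # torch.Size([2, 3])
```

For real inputs, pass images and texts through `processor(...)` and call the model to obtain the same kind of logits.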


**License:** cc-by-nc-4.0
