nmndeep/clip-vit-l-14-336-updated
PyTorch · clip_vision_model
clip-vit-l-14-336-updated: 1.22 GB, 1 contributor, 2 commits
Latest commit by nmndeep: "Add OpenCLIP -> HF CLIPVisionModel conversion for LLava" (ae61877, verified, 5 months ago)
.gitattributes            1.52 kB    initial commit (5 months ago)
config.json               245 Bytes  Add OpenCLIP -> HF CLIPVisionModel conversion for LLava (5 months ago)
preprocessor_config.json  366 Bytes  Add OpenCLIP -> HF CLIPVisionModel conversion for LLava (5 months ago)
pytorch_model.bin         1.22 GB    Add OpenCLIP -> HF CLIPVisionModel conversion for LLava (5 months ago)

Note: pytorch_model.bin is a pickle-serialized checkpoint. Detected pickle imports (3): collections.OrderedDict, torch.FloatStorage, torch._utils._rebuild_tensor_v2.
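Since the repo ships a config.json, preprocessor_config.json, and pytorch_model.bin in the HF CLIPVisionModel layout, it can presumably be loaded with `transformers.CLIPVisionModel.from_pretrained("nmndeep/clip-vit-l-14-336-updated")`. As an offline sketch, the snippet below builds a `CLIPVisionConfig` with the standard OpenAI ViT-L/14-336 hyperparameters; these values are an assumption based on the model name, not read from this repo's config.json.

```python
from transformers import CLIPVisionConfig

# Assumed hyperparameters for a CLIP ViT-L/14 vision tower at 336px input.
# These follow the standard OpenAI ViT-L/14-336 recipe; the actual values
# live in this repo's config.json and may differ.
config = CLIPVisionConfig(
    hidden_size=1024,        # ViT-L embedding width
    intermediate_size=4096,  # MLP hidden width
    num_hidden_layers=24,
    num_attention_heads=16,
    image_size=336,
    patch_size=14,
)

# Number of patch tokens the vision tower produces (a CLS token is prepended).
num_patches = (config.image_size // config.patch_size) ** 2
print(num_patches)  # 576
```

The 576 patch tokens (plus CLS) are what a LLaVA-style pipeline would take from this vision tower and project into the language model's embedding space.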