KevinX-Penn28/testing
Safetensors | vine | custom_code | License: mit
Files and versions
testing @ 8353b40 (3.63 GB)
1 contributor | History: 5 commits
Latest commit: KevinX-Penn28, "Upload VINE model - pipeline" (8353b40, verified, 5 months ago)
.gitattributes | Safe | 1.52 kB | initial commit | 5 months ago
README.md | Safe | 24 Bytes | initial commit | 5 months ago
config.json | Safe | 1.04 kB | Upload VINE model - pipeline | 5 months ago
flattening.py | Safe | 4.08 kB | Upload VINE model - model | 5 months ago
laser_model_v1.pkl | pickle | 1.82 GB | xet | Upload laser_model_v1.pkl | 5 months ago
Detected Pickle imports (32): transformers.models.clip.processing_clip.CLIPProcessor, transformers.models.clip.modeling_clip.CLIPMLP, torch._utils._rebuild_parameter, transformers.models.clip.modeling_clip.CLIPSdpaAttention, tokenizers.models.Model, transformers.models.clip.modeling_clip.CLIPModel, torch.FloatStorage, transformers.models.clip.tokenization_clip_fast.CLIPTokenizerFast, tokenizers.AddedToken, transformers.models.clip.modeling_clip.CLIPEncoder, transformers.models.clip.configuration_clip.CLIPConfig, torch.nn.modules.sparse.Embedding, transformers.models.clip.modeling_clip.CLIPVisionEmbeddings, tokenizers.Tokenizer, transformers.activations.QuickGELUActivation, torch.LongStorage, __builtin__.set, torch.nn.modules.normalization.LayerNorm, transformers.models.clip.modeling_clip.CLIPTextTransformer, transformers.models.clip.configuration_clip.CLIPTextConfig, llava_clip_model_v3.PredicateModel, _codecs.encode, torch.nn.modules.conv.Conv2d, collections.OrderedDict, transformers.models.clip.configuration_clip.CLIPVisionConfig, transformers.models.clip.modeling_clip.CLIPVisionTransformer, transformers.models.clip.modeling_clip.CLIPEncoderLayer, torch._utils._rebuild_tensor_v2, torch.nn.modules.container.ModuleList, transformers.models.clip.image_processing_clip.CLIPImageProcessor, torch.nn.modules.linear.Linear, transformers.models.clip.modeling_clip.CLIPTextEmbeddings
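The pickle-import warning above lists the global `module.name` references that a scanner found inside laser_model_v1.pkl without running it. A minimal sketch of how such a scan can work, using only the standard library's `pickletools` to walk the opcode stream (the demo object is made up; a full scanner would also resolve memoized string references, which this sketch skips):

```python
import pickle
import pickletools
from collections import OrderedDict

def list_pickle_globals(data: bytes) -> set:
    """Collect module.name references from a pickle's opcode stream
    via pickletools, without ever unpickling (no code execution)."""
    found = set()
    strings = []  # unicode strings pushed so far (STACK_GLOBAL operands)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE",
                           "BINUNICODE8", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            # Protocols <= 3: argument is "module name" on one line
            module, _, name = arg.partition(" ")
            found.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL":
            # Protocol 4+: the two most recently pushed strings are
            # the module and the attribute name
            if len(strings) >= 2:
                found.add(f"{strings[-2]}.{strings[-1]}")
    return found

# Demo: pickling an OrderedDict records a collections.OrderedDict import
data = pickle.dumps(OrderedDict(a=1), protocol=4)
print(list_pickle_globals(data))  # {'collections.OrderedDict'}
```

Because unpickling executes whatever those globals resolve to, a list like the one above is the reason to prefer the safetensors copy of the weights when one exists.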
model.safetensors | 1.82 GB | xet | Upload VINE model - model | 5 months ago
vine_config.py | Safe | 4.42 kB | Upload VINE model - config | 5 months ago
vine_model.py | Safe | 29.4 kB | Upload VINE model - model | 5 months ago
vine_pipeline.py | Safe | 30.4 kB | Upload VINE model - pipeline | 5 months ago
vis_utils.py | Safe | 37.4 kB | Upload VINE model - model | 5 months ago
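Unlike the pickle file, model.safetensors can be inspected without executing anything: the format starts with an 8-byte little-endian header length followed by a JSON table of tensor metadata. A minimal sketch of parsing that header with only the standard library, assuming the documented layout (the tensor name and values below are invented for the demo):

```python
import json
import struct

def read_safetensors_header(blob: bytes) -> dict:
    """Parse a safetensors header: the first 8 bytes hold an unsigned
    little-endian length N, followed by N bytes of JSON mapping each
    tensor name to its dtype, shape, and byte offsets."""
    (n,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8 : 8 + n].decode("utf-8"))

# Build a tiny in-memory safetensors file for demonstration:
# one float32 tensor "w" of shape [2] (8 bytes of data)
header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 8
print(read_safetensors_header(blob)["w"]["shape"])  # [2]
```

In practice the `safetensors` library does this (plus validation and lazy tensor loading), which is why the hub marks such files "Safe" while flagging the .pkl upload.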