Jiani Huang
Delete laser_model_v1.pt
Commit fcab1a3 (verified)
laser_model_v1.pkl: Detected Pickle imports (32)
- "transformers.models.clip.processing_clip.CLIPProcessor",
- "transformers.models.clip.modeling_clip.CLIPMLP",
- "torch._utils._rebuild_parameter",
- "transformers.models.clip.modeling_clip.CLIPSdpaAttention",
- "tokenizers.models.Model",
- "transformers.models.clip.modeling_clip.CLIPModel",
- "torch.FloatStorage",
- "transformers.models.clip.tokenization_clip_fast.CLIPTokenizerFast",
- "tokenizers.AddedToken",
- "transformers.models.clip.modeling_clip.CLIPEncoder",
- "transformers.models.clip.configuration_clip.CLIPConfig",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.clip.modeling_clip.CLIPVisionEmbeddings",
- "tokenizers.Tokenizer",
- "transformers.activations.QuickGELUActivation",
- "torch.LongStorage",
- "__builtin__.set",
- "torch.nn.modules.normalization.LayerNorm",
- "transformers.models.clip.modeling_clip.CLIPTextTransformer",
- "transformers.models.clip.configuration_clip.CLIPTextConfig",
- "llava_clip_model_v3.PredicateModel",
- "_codecs.encode",
- "torch.nn.modules.conv.Conv2d",
- "collections.OrderedDict",
- "transformers.models.clip.configuration_clip.CLIPVisionConfig",
- "transformers.models.clip.modeling_clip.CLIPVisionTransformer",
- "transformers.models.clip.modeling_clip.CLIPEncoderLayer",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.clip.image_processing_clip.CLIPImageProcessor",
- "torch.nn.modules.linear.Linear",
- "transformers.models.clip.modeling_clip.CLIPTextEmbeddings"
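The list above is the kind of report a static pickle scanner produces: it enumerates every global the file would import, without ever executing the pickle. A minimal sketch of such a scanner, using only the stdlib `pickletools` (the function name `list_pickle_imports` and the `STACK_GLOBAL` heuristic are my own, not Hugging Face's actual scanner):

```python
import pickle
import pickletools
from collections import OrderedDict

# Opcodes that push a unicode string onto the stack; STACK_GLOBAL
# consumes the last two of these as (module, name).
_STRING_OPS = {"SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"}

def list_pickle_imports(data: bytes) -> list[str]:
    """Report every module.name a pickle would import, WITHOUT unpickling it."""
    imports = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name in ("GLOBAL", "INST"):
            # Protocol <= 3: the argument is "module name" in one string.
            module, _, name = arg.partition(" ")
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # Protocol >= 4: heuristic -- take the last two string pushes.
            strs = [a for o, a, _ in ops[:i] if o.name in _STRING_OPS][-2:]
            if len(strs) == 2:
                imports.add(f"{strs[0]}.{strs[1]}")
    return sorted(imports)

# Demo: an OrderedDict pickles with a single global reference.
print(list_pickle_imports(pickle.dumps(OrderedDict(a=1))))
# ['collections.OrderedDict']
```

A real checkpoint like this one would surface the full list shown above, because loading it must import each of those classes (`CLIPModel`, `PredicateModel`, and so on).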
1.82 GB Reformatted weights files (#1)
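Scanners flag these imports because `pickle.load` will import and invoke any global named in the stream, which is arbitrary code execution. One conventional mitigation, when a pickle must be loaded at all, is to restrict `Unpickler.find_class` to an explicit allowlist. A sketch (the allowlist here is hypothetical, and this is not this repo's actual fix, which was to reformat the weights files):

```python
import io
import pickle
from collections import OrderedDict

class AllowlistUnpickler(pickle.Unpickler):
    """Unpickler that refuses any global not on an explicit allowlist."""
    # Hypothetical minimal allowlist; a real one would enumerate the
    # tensor-rebuild helpers the checkpoint legitimately needs.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked pickle import: {module}.{name}")

def safe_loads(data: bytes):
    """Load a pickle, rejecting any import outside the allowlist."""
    return AllowlistUnpickler(io.BytesIO(data)).load()

# An allowed payload round-trips; anything else raises UnpicklingError.
print(safe_loads(pickle.dumps(OrderedDict(a=1))))
```

PyTorch's `torch.load(..., weights_only=True)` applies the same idea with a built-in allowlist of tensor types; converting the checkpoint to safetensors, as the final commit here suggests, removes the unpickling step entirely.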