Upload tokenizer
68b63f7 verified
- 1.52 kB initial commit
- 5.17 kB Upload Phi3ForCausalLM
- 1.74 kB Upload Phi3ForCausalLM
- 152 Bytes Upload Phi3ForCausalLM
- 917 kB Upload tokenizer
pytorch_model-00001-of-00004.bin Detected Pickle imports (20)
- "torchao.quantization.linear_activation_quantized_tensor.LinearActivationQuantizedTensor",
- "torch.storage.UntypedStorage",
- "torch.BFloat16Storage",
- "torch._utils._rebuild_tensor_v2",
- "torchao.float8.inference.Float8MMConfig",
- "torchao.dtypes.floatx.float8_layout.Float8Layout",
- "torch._utils._rebuild_wrapper_subclass",
- "torch.FloatStorage",
- "torchao.quantization.quant_api._input_activation_quant_func_fp8",
- "torch.serialization._get_layout",
- "torch.float8_e4m3fn",
- "torchao.quantization.quant_primitives.ZeroPointDomain",
- "torch.bfloat16",
- "collections.OrderedDict",
- "torch._utils._rebuild_tensor_v3",
- "torch._tensor._rebuild_from_type_v2",
- "torchao.quantization.granularity.PerRow",
- "torchao.dtypes.floatx.float8_layout.Float8AQTTensorImpl",
- "torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor",
- "torch.device"
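The "Detected Pickle imports" list above can be reproduced locally without ever executing the pickle: the opcode stream is inspectable with the standard-library `pickletools` module, and the import opcodes (`GLOBAL` for older protocols, `STACK_GLOBAL` for protocol 4+) carry the `module.name` pairs. A minimal sketch — `scan_pickle_imports` is an illustrative helper of my own, not part of any library, and it is simplified (it does not resolve memo back-references the way a production scanner must):

```python
import pickle
import pickletools
from collections import OrderedDict

def scan_pickle_imports(data: bytes) -> set[str]:
    """Collect the module.name pairs a pickle would import, without loading it."""
    ops = list(pickletools.genops(data))
    found = set()
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # protocol <= 3: argument is "module name" in one string
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # protocol >= 4: module and name are the two most recent string
            # constants pushed onto the stack (simplified: memo re-use of
            # earlier strings is not handled here)
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            found.add(f"{strings[-2]}.{strings[-1]}")
    return found

data = pickle.dumps(OrderedDict(a=1))
print(scan_pickle_imports(data))  # {'collections.OrderedDict'}
```

Run against one of the `.bin` shards above, a scanner along these lines would surface the same `torch.*` / `torchao.*` entries the listing shows.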
4.84 GB Upload Phi3ForCausalLM

pytorch_model-00002-of-00004.bin Detected Pickle imports (20)
(the same 20 torch/torchao imports as listed for pytorch_model-00001-of-00004.bin, reported in a different order)
4.96 GB Upload Phi3ForCausalLM

pytorch_model-00003-of-00004.bin Detected Pickle imports (20)
(the same 20 torch/torchao imports as listed for pytorch_model-00001-of-00004.bin, reported in a different order)
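As for fixing the warning: the usual remedies are to convert the checkpoint to safetensors, or to load it with `torch.load(..., weights_only=True)` after allow-listing the torch/torchao classes the scanner reports (recent PyTorch exposes `torch.serialization.add_safe_globals` for this). The underlying idea can be sketched with only the standard library: an `Unpickler` whose `find_class` rejects anything outside an explicit allowlist. The `ALLOWED` set below is illustrative only; a real list for these shards would hold exactly the 20 entries reported above:

```python
import io
import pickle
from collections import OrderedDict

# Illustrative allowlist. A real one for these shards would contain the
# 20 torch.* / torchao.* entries reported by the pickle scanner.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that refuses to import anything outside ALLOWED."""
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked pickle import: {module}.{name}")

def restricted_loads(data: bytes):
    """Deserialize, permitting only allow-listed imports."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Round-trips an allow-listed object; anything else raises UnpicklingError.
print(restricted_loads(pickle.dumps(OrderedDict(a=1))))
```

`weights_only=True` applies the same principle inside PyTorch's loader, so for these files it is the lower-friction route; converting to safetensors removes the pickle attack surface entirely.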
4.87 GB Upload Phi3ForCausalLM

- 1.03 GB Upload Phi3ForCausalLM
- 20.4 kB Upload Phi3ForCausalLM
- 463 Bytes Upload tokenizer
- 7.15 MB Upload tokenizer
- 17.8 kB Upload tokenizer
- 1.61 MB Upload tokenizer