llama
text-generation-inference
Update README.md
76e9031 - configs update README and add config file
- 1.48 kB initial commit
- 3.99 kB Update README.md
- 582 Bytes Change cache = true in config.json to significantly boost inference performance (#1)
- 137 Bytes first epoch pre-release
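The commit message above refers to enabling the cache flag in config.json. On Llama-family checkpoints the relevant key is presumably `use_cache`, which lets the model reuse the key/value cache across decoding steps instead of recomputing attention for the whole prefix at every token. A minimal sketch of the relevant fragment (the key name assumes the standard Hugging Face transformers config schema):

```json
{
  "use_cache": true
}
```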
pytorch_model-00001-of-00003.bin - 9.95 GB - push up second epoch of wizard-mega
pytorch_model-00002-of-00003.bin - 9.9 GB - push up second epoch of wizard-mega
pytorch_model-00003-of-00003.bin - 6.18 GB - push up second epoch of wizard-mega

All three shards are flagged by the pickle scanner with the same six detected imports:
- "torch.BFloat16Storage"
- "torch.FloatStorage"
- "torch.Tensor"
- "collections.OrderedDict"
- "torch._utils._rebuild_tensor_v2"
- "torch._tensor._rebuild_from_type_v2"

- 33.4 kB first epoch pre-release
- 411 Bytes first epoch pre-release
- 1.84 MB first epoch pre-release
- 500 kB first epoch pre-release
- 700 Bytes first epoch pre-release
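The "Detected Pickle imports" warnings above appear because the .bin shards are plain `torch.save` checkpoints, which use Python's pickle format and can in principle execute arbitrary code on load. The six imports listed are all benign tensor/storage types, and they are exactly what PyTorch's restricted loader permits. A minimal sketch of loading such a shard safely (assuming PyTorch 1.13+, which adds `weights_only=True`; a tiny dummy checkpoint stands in for the real ~10 GB files):

```python
import os
import tempfile

import torch

# Stand-in for one shard: a state dict of bfloat16 weights, saved the
# same way the pytorch_model-0000x-of-00003.bin files were.
ckpt = {"layer.weight": torch.randn(4, 4, dtype=torch.bfloat16)}
path = os.path.join(tempfile.mkdtemp(), "pytorch_model-00001-of-00003.bin")
torch.save(ckpt, path)

# weights_only=True restricts unpickling to tensors, storages, and basic
# containers (torch.Tensor, torch.BFloat16Storage, collections.OrderedDict,
# the _rebuild_* helpers, etc.) -- i.e. exactly the imports the scanner
# detected -- and refuses arbitrary pickle opcodes.
state = torch.load(path, map_location="cpu", weights_only=True)
```

The more permanent fix for these warnings is to convert the shards to the safetensors format, which stores raw tensor data with no pickle at all.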