4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ
Pipeline: Text Generation · Library: PEFT · Language: English
Tags: llama, llama-3-8b, llama-3-8b-quantized, llama-3-8b-autogptq, meta, quantized, 4-bit precision, gptq
License: other
Repository: 5.74 GB · 1 contributor · History: 4 commits
Latest commit: b613dae (verified) by 4darsh-Dev, "updated readme", over 1 year ago
Files:
.gitattributes              1.52 kB    initial commit                        over 1 year ago
README.md                   557 Bytes  updated readme                        over 1 year ago
config.json                 1.03 kB    Upload of AutoGPTQ quantized model    over 1 year ago
gptq_model-4bit-128g.bin    5.74 GB    Upload of AutoGPTQ quantized model    over 1 year ago
quantize_config.json        265 Bytes  Upload of AutoGPTQ quantized model    over 1 year ago
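The tags and the `gptq_model-4bit-128g.bin` filename indicate an AutoGPTQ checkpoint quantized to 4 bits with group size 128. A minimal sketch of loading it, assuming the `auto-gptq` and `transformers` packages are installed; the repo id comes from this page, while the helper function, device choice, and size estimate are illustrative, not part of the repository:

```python
REPO_ID = "4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ"  # repo id from this page

def approx_gptq_size_gb(n_params: float, bits: int = 4, group_size: int = 128) -> float:
    """Rough on-disk size of GPTQ-packed weights: low-bit weights plus an
    fp16 scale and zero-point stored once per group of `group_size` weights."""
    packed = n_params * bits / 8                    # packed low-bit weights, in bytes
    overhead = (n_params / group_size) * 2 * 2      # fp16 scale + zero per group
    return (packed + overhead) / 1e9

def load_quantized(device: str = "cuda:0"):
    """Illustrative loader; requires a GPU and `pip install auto-gptq transformers`."""
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoGPTQForCausalLM.from_quantized(REPO_ID, device=device)
    return tokenizer, model

# ~8B parameters at 4-bit, group size 128: packed weights alone come to
# roughly 4.25 GB; untouched fp16 tensors (embeddings etc.) account for
# the rest of the 5.74 GB shown above.
print(f"~{approx_gptq_size_gb(8e9):.2f} GB packed weights")
```

The size estimate is only a sanity check on the listing above; actual layout depends on which layers AutoGPTQ leaves unquantized.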