codewithdark/Llama-3.2-1B-2bit-gguf (GGUF)
Llama-3.2-1B-2bit-gguf/quant_config.json (branch: main)
codewithdark: Add 2-bit Q2_K GGUF model quantized from meta-llama/Llama-3.2-1B (2025-06-25 10:08:02), commit 6c2cf6a (verified), 9 months ago
File size: 60 Bytes
{
  "bits": 2,
  "quant_type": "Q2_K",
  "group_size": 128
}
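For reference, the three fields above can be read with Python's standard json module. This is a minimal sketch with the file contents inlined as a string (no download from the Hub assumed); the field names come from the file itself, and Q2_K denotes llama.cpp's ~2-bit k-quant type, with "group_size" giving the number of weights sharing quantization parameters.

```python
import json

# Contents of quant_config.json, copied verbatim from the repo file above.
raw = '{"bits": 2, "quant_type": "Q2_K", "group_size": 128}'

config = json.loads(raw)

# bits=2: target bit width; quant_type=Q2_K: llama.cpp k-quant scheme;
# group_size=128: weights per quantization group.
print(config["bits"])        # 2
print(config["quant_type"])  # Q2_K
print(config["group_size"])  # 128
```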