Hugging Face
ZeroWw/granite-3.2-2b-instruct-GGUF

Text Generation · GGUF · English · conversational
License: mit

Branch: main · 16.2 GB total · 1 contributor · History: 2 commits
Latest commit: ZeroWw · "Upload folder using huggingface_hub" · 698b529 (verified) · about 1 year ago
File                                 Size
.gitattributes                       1.94 kB
README.md                            339 Bytes
granite-3.2-2b-instruct.f16.gguf     5.07 GB
granite-3.2-2b-instruct.q5_k.gguf    1.92 GB
granite-3.2-2b-instruct.q6_k.gguf    2.2 GB
granite-3.2-2b-instruct.q8_0.gguf    2.79 GB
granite-3.2-2b-instruct.q8_p.gguf    2.69 GB
granite-3.2-2b-instruct.q8q4.gguf    1.57 GB

All files were added in a single "Upload folder using huggingface_hub" commit, about 1 year ago.
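The quantizations above trade file size (and roughly, memory footprint) against fidelity. A minimal sketch of choosing one programmatically: the `pick_quant` helper is hypothetical (not part of this repository or any library), the sizes are copied from the file listing above, and the commented-out download call uses `hf_hub_download` from the `huggingface_hub` package.

```python
# Sizes (GB) copied from the repository's file listing.
QUANTS = {
    "granite-3.2-2b-instruct.f16.gguf": 5.07,
    "granite-3.2-2b-instruct.q8_0.gguf": 2.79,
    "granite-3.2-2b-instruct.q8_p.gguf": 2.69,
    "granite-3.2-2b-instruct.q6_k.gguf": 2.2,
    "granite-3.2-2b-instruct.q5_k.gguf": 1.92,
    "granite-3.2-2b-instruct.q8q4.gguf": 1.57,
}

def pick_quant(budget_gb: float):
    """Hypothetical helper: largest listed file that fits the budget, else None."""
    fitting = [(size, name) for name, size in QUANTS.items() if size <= budget_gb]
    return max(fitting)[1] if fitting else None

if __name__ == "__main__":
    chosen = pick_quant(3.0)
    print(chosen)
    # To actually fetch it (requires `pip install huggingface_hub` and network):
    # from huggingface_hub import hf_hub_download
    # path = hf_hub_download("ZeroWw/granite-3.2-2b-instruct-GGUF", chosen)
```

With a 3 GB budget this selects the q8_0 file (2.79 GB); the resulting GGUF can then be loaded by llama.cpp-compatible runtimes.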