RedHatAI/granite-3.1-8b-base-quantized.w4a16
Tags: Text Generation · Safetensors · English · granite · w4a16 · int4 · vllm · compressed-tensors
License: apache-2.0
Repository size: 4.92 GB
5 contributors · History: 14 commits
Latest commit: ekurtic, "Update README.md" (8354230, verified), about 1 year ago
File                     Size        Last commit          Age
.gitattributes           1.52 kB     initial commit       about 1 year ago
README.md                15.7 kB     Update README.md     about 1 year ago
config.json              13.7 kB     Upload model files   about 1 year ago
generation_config.json   132 Bytes   Upload model files   about 1 year ago
merges.txt               442 kB      Upload model files   about 1 year ago
model.safetensors        4.92 GB     Upload model files   about 1 year ago
recipe.yaml              336 Bytes   Upload model files   about 1 year ago
special_tokens_map.json  1.02 kB     Upload model files   about 1 year ago
tokenizer.json           3.48 MB     Upload model files   about 1 year ago
tokenizer_config.json    4.16 kB     Upload model files   about 1 year ago
vocab.json               777 kB      Upload model files   about 1 year ago