buthainaaa/Fanar-1-9B-Instruct-GPTQ

Tags: Safetensors · Arabic · English · gemma2 · awq · quantized · 4bit · vllm · fanar · compressed-tensors
License: apache-2.0
Fanar-1-9B-Instruct-GPTQ · 5.23 GB · 1 contributor · History: 3 commits
Latest commit: buthainaaa, "Update README.md" (b262c1c, verified, 6 days ago)
File                              Size       Last commit message                    When
.gitattributes                    1.57 kB    Initial upload of AWQ-quantized model  6 days ago
README.md                         577 Bytes  Update README.md                       6 days ago
chat_template.jinja               771 Bytes  Initial upload of AWQ-quantized model  6 days ago
config.json                       2.8 kB     Initial upload of AWQ-quantized model  6 days ago
generation_config.json            168 Bytes  Initial upload of AWQ-quantized model  6 days ago
model-00001-of-00002.safetensors  4.98 GB    Initial upload of AWQ-quantized model  6 days ago
model-00002-of-00002.safetensors  231 MB     Initial upload of AWQ-quantized model  6 days ago
model.safetensors.index.json      92.4 kB    Initial upload of AWQ-quantized model  6 days ago
recipe.yaml                       264 Bytes  Initial upload of AWQ-quantized model  6 days ago
special_tokens_map.json           555 Bytes  Initial upload of AWQ-quantized model  6 days ago
tokenizer.json                    18.1 MB    Initial upload of AWQ-quantized model  6 days ago
tokenizer.model                   2.12 MB    Initial upload of AWQ-quantized model  6 days ago
tokenizer_config.json             46.4 kB    Initial upload of AWQ-quantized model  6 days ago