buthainaaa/Fanar-1-9B-Instruct-GPTQ

Tags: Safetensors, Arabic, English, gemma2, awq, quantized, 4bit, vllm, fanar, compressed-tensors
License: apache-2.0
Files and versions (branch: main)
1 contributor, 3 commits
Latest commit: b262c1c (verified) by buthainaaa, "Update README.md", 3 months ago
File                              Size       Last commit message                    Age
.gitattributes                    1.57 kB    Initial upload of AWQ-quantized model  3 months ago
README.md                         577 Bytes  Update README.md                       3 months ago
chat_template.jinja               771 Bytes  Initial upload of AWQ-quantized model  3 months ago
config.json                       2.8 kB     Initial upload of AWQ-quantized model  3 months ago
generation_config.json            168 Bytes  Initial upload of AWQ-quantized model  3 months ago
model-00001-of-00002.safetensors  4.98 GB    Initial upload of AWQ-quantized model  3 months ago
model-00002-of-00002.safetensors  231 MB     Initial upload of AWQ-quantized model  3 months ago
model.safetensors.index.json      92.4 kB    Initial upload of AWQ-quantized model  3 months ago
recipe.yaml                       264 Bytes  Initial upload of AWQ-quantized model  3 months ago
special_tokens_map.json           555 Bytes  Initial upload of AWQ-quantized model  3 months ago
tokenizer.json                    18.1 MB    Initial upload of AWQ-quantized model  3 months ago
tokenizer.model                   2.12 MB    Initial upload of AWQ-quantized model  3 months ago
tokenizer_config.json             46.4 kB    Initial upload of AWQ-quantized model  3 months ago