marcelone/Jinx-Qwen3-8B-gguf

GGUF, conversational
License: apache-2.0
Files (main branch): 107 GB total
1 contributor, 6 commits
Latest commit: Update README.md by marcelone (75d0c54, verified, 8 months ago)
| File | Size | Last commit message | Last updated |
|------|------|---------------------|--------------|
| .gitattributes | 2.13 kB | Upload Jinx-Qwen3-8B-gguf-q6_k_oe32.gguf | 8 months ago |
| Jinx-Qwen3-8B-gguf-bf16.gguf | 16.4 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-f32.gguf | 32.8 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-iq4_nl_oe.gguf | 6.45 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-q4_k_oe.gguf | 6.66 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-q5_k_oe.gguf | 7.4 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-q6_k_oe.gguf | 8.19 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-q6_k_oe32.gguf | 10.7 GB | Upload Jinx-Qwen3-8B-gguf-q6_k_oe32.gguf | 8 months ago |
| Jinx-Qwen3-8B-gguf-q8_0.gguf | 8.71 GB | Upload folder using huggingface_hub | 8 months ago |
| Jinx-Qwen3-8B-gguf-q8_0_oe.gguf | 9.88 GB | Upload folder using huggingface_hub | 8 months ago |
| README.md | 257 Bytes | Update README.md | 8 months ago |
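Since the repo ships several quantizations of the same model, a typical workflow is to pick the largest quant that fits your disk or VRAM budget and fetch only that one file with `huggingface_hub`. The sketch below is illustrative: `hf_hub_download` is the real `huggingface_hub` API, the file names and sizes are taken from the listing above, and the `pick_quant` helper is a hypothetical convenience function, not part of the repo.

```python
REPO_ID = "marcelone/Jinx-Qwen3-8B-gguf"

# (filename, approximate size in GB) pairs from the file listing above.
QUANTS = [
    ("Jinx-Qwen3-8B-gguf-iq4_nl_oe.gguf", 6.45),
    ("Jinx-Qwen3-8B-gguf-q4_k_oe.gguf", 6.66),
    ("Jinx-Qwen3-8B-gguf-q5_k_oe.gguf", 7.4),
    ("Jinx-Qwen3-8B-gguf-q6_k_oe.gguf", 8.19),
    ("Jinx-Qwen3-8B-gguf-q8_0.gguf", 8.71),
    ("Jinx-Qwen3-8B-gguf-q8_0_oe.gguf", 9.88),
    ("Jinx-Qwen3-8B-gguf-q6_k_oe32.gguf", 10.7),
    ("Jinx-Qwen3-8B-gguf-bf16.gguf", 16.4),
    ("Jinx-Qwen3-8B-gguf-f32.gguf", 32.8),
]


def pick_quant(budget_gb: float) -> str:
    """Return the largest quant file that still fits within budget_gb.

    Hypothetical helper: bigger quants generally preserve more quality,
    so we take the largest one that fits the budget.
    """
    fitting = [(name, size) for name, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError(f"no quant in this repo fits within {budget_gb} GB")
    return max(fitting, key=lambda pair: pair[1])[0]


if __name__ == "__main__":
    # Import here so the selection helper works without the dependency.
    # Requires: pip install huggingface_hub
    from huggingface_hub import hf_hub_download

    filename = pick_quant(budget_gb=8.0)
    local_path = hf_hub_download(repo_id=REPO_ID, filename=filename)
    print(local_path)
```

With an 8 GB budget this selects `Jinx-Qwen3-8B-gguf-q5_k_oe.gguf` (7.4 GB); the downloaded GGUF file can then be passed directly to a llama.cpp-compatible runtime.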