# Qwen3-Coder-Next-heretic GGUF
Quantized GGUF versions of trohrbaugh/Qwen3-Coder-Next-heretic for use with llama.cpp.
## Quantizations
| File | Quant type |
|---|---|
| Qwen3-Coder-Next-heretic.Q8_0.gguf | Q8_0 |
| Qwen3-Coder-Next-heretic.Q6_K_M.gguf | Q6_K |
| Qwen3-Coder-Next-heretic.Q4_K_M.gguf | Q4_K_M |
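A minimal sketch of downloading and running one of these quants with llama.cpp. The exact flags depend on your llama.cpp version; the context size and sampling parameters below are illustrative assumptions, not recommendations from the model author:

```shell
# Fetch a single quantized file from this repo (requires huggingface_hub)
huggingface-cli download ghecko78/Qwen3-Coder-Next-Heretic-GGUF \
  Qwen3-Coder-Next-heretic.Q4_K_M.gguf --local-dir .

# Run an interactive session with llama.cpp's CLI
# (-c sets context length, -ngl offloads layers to GPU if built with GPU support)
llama-cli -m Qwen3-Coder-Next-heretic.Q4_K_M.gguf \
  -c 4096 -ngl 99 \
  -p "Write a Python function that reverses a linked list."
```

The same file also works with `llama-server` for an OpenAI-compatible HTTP endpoint, e.g. `llama-server -m Qwen3-Coder-Next-heretic.Q4_K_M.gguf --port 8080`.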
## Model tree for ghecko78/Qwen3-Coder-Next-Heretic-GGUF

Base model: trohrbaugh/Qwen3-Coder-Next-heretic