Llama.cpp hybrid layer quantization of Qwen3-Coder-Next by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-Coder-Next
The hybrid quant employs different quantization levels on a per-layer basis to increase flexibility in trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. For this file the layer quants are as follows:
Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K
LAYER_TYPES='[
[0 ,"Q5_K_S"],[1 ,"Q4_K_L"],[2 ,"Q4_K_M"],[3 ,"Q4_K_S"],[4 ,"Q4_K_S"],[5 ,"Q4_K_S"],[6 ,"Q4_K_S"],[7 ,"Q4_K_S"],
[8 ,"Q4_K_S"],[9 ,"Q4_K_S"],[10,"Q4_K_S"],[11,"Q4_K_S"],[12,"Q4_K_S"],[13,"Q4_K_S"],[14,"Q4_K_S"],[15,"Q4_K_S"],
[16,"Q4_K_S"],[17,"Q4_K_S"],[18,"Q4_K_S"],[19,"Q4_K_S"],[20,"Q4_K_M"],[21,"Q4_K_S"],[22,"Q4_K_M"],[23,"Q4_K_S"],
[24,"Q4_K_M"],[25,"Q4_K_M"],[26,"Q4_K_M"],[27,"Q4_K_M"],[28,"Q4_K_M"],[29,"Q4_K_M"],[30,"Q4_K_M"],[31,"Q4_K_M"],
[32,"Q4_K_M"],[33,"Q4_K_M"],[34,"Q4_K_M"],[35,"Q4_K_M"],[36,"Q4_K_M"],[37,"Q4_K_M"],[38,"Q4_K_M"],[39,"Q4_K_M"],
[40,"Q4_K_M"],[41,"Q4_K_M"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
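A sketch of the corresponding quantization call. Note the `LAYER_TYPES` environment variable and the `--layer-types-high` flag are assumed to come from the hybrid layer quant patch; stock llama-quantize understands only the `--token-embedding-type` and `--output-tensor-type` flags, and the file names here are placeholders.

```shell
# sketch only: assumes the hybrid layer quant patched llama-quantize;
# LAYER_TYPES (defined above) and --layer-types-high are not in stock llama.cpp
llama-quantize $FLAGS \
  Qwen3-Coder-Next.BF16.gguf \
  Qwen3-Coder-Next.Q4_K_H.gguf \
  Q4_K_M
```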
The layer quants were optimized for 100% success across a small set of code generation test prompts, while keeping the file sized for machines with 48G of CPU RAM and one consumer grade GPU (8G VRAM or higher). The model was found to be very sensitive to quantization, with degradation negatively impacting its ability to generate working code.
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| Q4_K_M | 48.5e9 | 7.64 | default embed and output |
| Q4_K_H | 48.3e9 | 7.65 | Q6_K embed Q6_K output |
Usage:
This is an 80B parameter coding-optimized MoE model with 3B activated parameters. It can be run efficiently by offloading expert tensors to CPU via -ot exps=CPU, which opens up very large context space even on low-VRAM GPUs. The smaller size of the optimally quantized parameters gives an effective boost in CPU processing speed by reducing the memory bandwidth needed to repeatedly copy them from main memory to SIMD registers.
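A minimal launch sketch. The model path and context size are placeholders; -m, -c, -ngl, and -ot are standard llama.cpp flags.

```shell
# keep all expert tensors in CPU RAM, everything else on GPU
llama-server -m Qwen3-Coder-Next.Q4_K_H.gguf \
  -c 32768 \
  -ngl 99 \
  -ot exps=CPU
```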
The model cannot use speculative decoding due to its recurrent attention scheme, which prohibits it. In this particular case the limitation is not severe, since the experts will have to run on CPU in most setups, which makes speculation impractical anyway. The model was sized at ~48G and should run on a 48G RAM machine. For unknown reasons it appears necessary to offload all experts to CPU; if some expert layers are left on GPU, the gen rate is cut in half as of llama.cpp b7972. This most likely means some backend op the model needs can't run on CUDA, so the GPU experts are kicked over to CPU dynamically every token, slowing the whole system down. If and when this problem gets fixed in llama.cpp, it should be feasible to do partial CPU expert offload and gain some t/s speedup over full CPU expert offload.
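The memory bandwidth argument can be sanity-checked with a back-of-the-envelope bound: per generated token, roughly all activated weights must stream from RAM once. The bandwidth and bits-per-weight numbers below are illustrative assumptions, not measurements:

```shell
# rough upper bound on CPU gen rate: RAM bandwidth / bytes streamed per token
awk 'BEGIN {
  active = 3e9    # activated parameters per token (3B)
  bpw    = 4.8    # assumed average bits/weight for this hybrid quant
  bw     = 40e9   # assumed dual-channel DDR4 bandwidth, bytes/s
  printf "%.1f tok/s upper bound\n", bw / (active * bpw / 8)
}'
# -> 22.2 tok/s upper bound
```

Under these assumptions the bound lands in the same ballpark as the measured full-offload gen rate, consistent with the model being memory bandwidth limited on CPU.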
Rough performance metrics on a 9900K (128G RAM) and 4070 (12G VRAM):
| CPU exp offload | QKV | Context size | Gen rate (t/s) | -ot config |
|---|---|---|---|---|
| all | F16 | 256K | 20.1 | OT="-ot exps=CPU -ngl 99" |
| 4-48 | F16 | 256K | 8 | OT="-ot blk\.([4-9]|[1-3][0-9]|4[0-7])\..*exps=CPU -ngl 99" |
| 7-48 | F16 | 128K | 10 | OT="-ot blk\.([7-9]|[1-3][0-9]|4[0-7])\..*exps=CPU -ngl 99" |
| 7-48 | Q8_0 | 256K | 10 | OT="-ot blk\.([7-9]|[1-3][0-9]|4[0-7])\..*exps=CPU -ngl 99" |
| 9-48 | Q8_0 | 128K | 10 | OT="-ot blk\.(9|[1-3][0-9]|4[0-7])\..*exps=CPU -ngl 99" |
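The layer-range patterns are plain extended regexes matched against GGUF tensor names, which for this architecture look like blk.N.ffn_gate_exps.weight; a grouped form such as blk\.([4-9]|[1-3][0-9]|4[0-7])\. keeps the alternation scoped to the layer number. A quick check with grep, using hypothetical tensor names:

```shell
# which expert tensors does the layers 4-47 pattern send to CPU?
pat='blk\.([4-9]|[1-3][0-9]|4[0-7])\..*exps'
for t in blk.3.ffn_gate_exps.weight blk.10.ffn_up_exps.weight blk.47.ffn_down_exps.weight; do
  if echo "$t" | grep -Eq "$pat"; then echo "$t -> CPU"; else echo "$t -> GPU"; fi
done
# blk.3.* stays on GPU; blk.10.* and blk.47.* go to CPU
```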
High-context performance appears to work, verified against a needle-in-a-haystack prompt. However, prompt processing is too slow to be practical on very large prompts without a much stronger CPU or full GPU offload of the model.
Benchmarks:
Code evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.
Download the file below:
| Link | Type | Size/e9 B | Notes |
|---|---|---|---|
| Qwen3-Coder-Next.Q4_K_H.gguf | Q4_K_H | 48.39 B | ~Q4_K_M size |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository.