ik_llama.cpp imatrix Quantizations of Qwen/Qwen3-Coder-Next
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds. Also check for ik_llama.cpp Windows builds by Thireus here.
These quants provide best-in-class perplexity for the given memory footprint.
Big Thanks
Shout out to Wendell and the Level1Techs crew, the community forums, and YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!
Quant Collection
Perplexity computed against wiki.test.raw. (lower is "better")
These two are just test quants for baseline perplexity comparison and are not available for download here:
BF16 148.502 GiB (16.010 BPW) - PPL over 584 chunks for n_ctx=512 = 8.2278 +/- 0.06392
Q8_0 78.982 GiB (8.515 BPW) - PPL over 584 chunks for n_ctx=512 = 8.2239 +/- 0.06389
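The PPL numbers here come from ik_llama.cpp's perplexity tool; a minimal sketch of the measurement setup (the model path is a placeholder, flags match the reported n_ctx=512 over wiki.test.raw):

```shell
# Measure perplexity over wiki.test.raw at n_ctx=512, the same setup as the
# numbers reported in this section. The model path is a placeholder.
./build/bin/llama-perplexity \
    -m /path/to/Qwen3-Coder-Next-IQ4_KSS.gguf \
    -f wiki.test.raw \
    -c 512
```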
NOTE: The first split file is much smaller on purpose, as it only contains metadata; it's fine!
IQ4_KSS 39.377 GiB (4.245 BPW)
PPL over 584 chunks for n_ctx=512 = 8.3069 +/- 0.06459
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_ba\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss
# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/imatrix-Qwen3-Coder-Next-BF16.dat \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-512x2.5B-BF16-00001-of-00004.gguf \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-IQ4_KSS.gguf \
IQ4_KSS \
128
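The `grep | sed` pipeline in the recipe collapses the multi-line rule list into the single comma-separated string that `--custom-q` expects; a self-contained sketch with a toy two-rule recipe:

```shell
# Toy recipe: comment lines are dropped by grep, runs of newlines become
# commas, and leading/trailing commas are stripped by sed.
custom="
# this comment line is removed by grep
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss
"
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
# custom is now:
# blk\..*\.ffn_down_exps\.weight=iq4_ks,blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss
echo "$custom"
```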
smol-IQ3_KS 30.728 GiB (3.313 BPW)
PPL over 584 chunks for n_ctx=512 = 8.4605 +/- 0.06623
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=iq6_k
blk\..*\.attn_qkv\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
blk\..*\.attn_q\.weight=iq6_k
blk\..*\.attn_k\.weight=iq6_k
blk\..*\.attn_v\.weight=iq6_k
blk\..*\.ssm_ba\.weight=iq6_k
blk\..*\.ssm_out\.weight=iq6_k
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
#--exclude-weights ffn_gate_exps \
#--exclude-weights ffn_up_exps \
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/imatrix-Qwen3-Coder-Next-BF16.dat \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-512x2.5B-BF16-00001-of-00004.gguf \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-smol-IQ3_KS.gguf \
IQ3_KS \
128
smol-IQ2_KS 22.097 GiB (2.382 BPW)
PPL over 584 chunks for n_ctx=512 = 9.4488 +/- 0.07565
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=q8_0
blk\..*\.attn_qkv\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
blk\..*\.attn_q\.weight=q8_0
blk\..*\.attn_k\.weight=q8_0
blk\..*\.attn_v\.weight=q8_0
blk\..*\.ssm_ba\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
#--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/imatrix-Qwen3-Coder-Next-BF16.dat \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-512x2.5B-BF16-00001-of-00004.gguf \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-smol-IQ2_KS.gguf \
IQ2_KS \
128
IQ1_KT 19.056 GiB (2.055 BPW)
PPL over 584 chunks for n_ctx=512 = 9.6513 +/- 0.07696
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 60 Repeating Layers [0-59]
## Gated Attention/Delta Net [Blended 0-59]
blk\..*\.attn_gate\.weight=iq6_k
blk\..*\.attn_qkv\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
blk\..*\.attn_q\.weight=iq6_k
blk\..*\.attn_k\.weight=iq6_k
blk\..*\.attn_v\.weight=iq6_k
blk\..*\.ssm_ba\.weight=iq6_k
blk\..*\.ssm_out\.weight=iq6_k
# Shared Expert Layers [0-59]
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k
# Routed Experts Layers [0-59]
blk\..*\.ffn_down_exps\.weight=iq2_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
#--dry-run \
#gdb -q --args \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/imatrix-Qwen3-Coder-Next-BF16.dat \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-512x2.5B-BF16-00001-of-00004.gguf \
/mnt/data/models/ubergarm/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-IQ1_KT.gguf \
IQ1_KT \
128
Quick Start
Check some recent model cards for examples on running models.
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
# Download Desired Quants
$ pip install huggingface_hub
$ hf download --local-dir ./ --include=smol-IQ2_KS/*.gguf ubergarm/Qwen3-Coder-Next-GGUF
# Full GPU offload
# For 2 or more GPUs keep an eye on `-sm graph` support:
# https://github.com/ikawrakow/ik_llama.cpp/pull/1292
CUDA_VISIBLE_DEVICES="0,1" \
./build/bin/llama-server \
--model "$model" \
--alias Qwen3-Coder-Next \
-c 262144 \
-fa on \
-ger \
--merge-qkv \
-sm graph \
-ngl 99 \
-ub 2048 -b 2048 \
--threads 1 \
--host 127.0.0.1 \
--port 8080 \
--jinja \
--no-mmap
# Hybrid CPU+GPU
# basically use --n-cpu-moe etc...
echo TODO
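Until the TODO above is filled in, here is an untested sketch of the hybrid idea, assuming the same flags as the full-GPU example plus `--n-cpu-moe`; the layer count is a placeholder to tune against your VRAM:

```shell
# Hybrid CPU+GPU sketch (assumption, not a verified command): -ngl 99 offloads
# everything, then --n-cpu-moe N keeps the routed-expert tensors of the first
# N layers in CPU RAM while attention/kv-cache stays on GPU. N=40 is a
# placeholder; raise it until the model fits in VRAM.
./build/bin/llama-server \
    --model "$model" \
    --alias Qwen3-Coder-Next \
    -c 65536 \
    -fa on \
    -ngl 99 \
    --n-cpu-moe 40 \
    -ub 2048 -b 2048 \
    --threads 16 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja \
    --no-mmap
```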
# CPU-Only
# Gated delta net CPU-only performance seems slower than on other architectures; ideally have at least 1x GPU for attn/kv-cache
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
--model "$model" \
--alias Qwen3-Coder-Next \
--ctx-size 131072 \
-ger \
--merge-qkv \
-ctk q8_0 -ctv q8_0 \
-ub 4096 -b 4096 \
--parallel 1 \
--threads 96 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja