Testing IQ4_KSS

#2
by shewin - opened

```
Tensor blk.47.ffn_down_exps.weight buffer type overriden to CPU
llm_load_tensors: offloading 48 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 49/49 layers to GPU
llm_load_tensors: CPU buffer size = 37920.00 MiB
llm_load_tensors: CUDA_Host buffer size = 245.75 MiB
llm_load_tensors: CUDA0 buffer size = 2156.71 MiB
...................................................................................................
ggml_backend_cuda_context: have 0 graphs
llama_init_from_model: f16
llama_init_from_model: n_ctx = 200192
llama_init_from_model: n_batch = 7096
llama_init_from_model: n_ubatch = 7096
llama_init_from_model: flash_attn = 1
llama_init_from_model: attn_max_b = 2048
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 1
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 5000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 4767.38 MiB
llama_init_from_model: KV self size = 4692.00 MiB, K (f16): 2346.00 MiB, V (f16): 2346.00 MiB
llama_init_from_model: CUDA_Host output buffer size = 0.58 MiB
llama_init_from_model: CUDA0 compute buffer size = 4197.09 MiB
llama_init_from_model: CUDA_Host compute buffer size = 2768.16 MiB
llama_init_from_model: graph nodes = 101374
llama_init_from_model: graph splits = 98
llama_init_from_model: enabling only_active_experts scheduling
```

`main: n_kv_max = 200192, n_batch = 7096, n_ubatch = 7096, flash_attn = 1, n_gpu_layers = 99, n_threads = 101, n_threads_batch = 101`

|   PP |   TG |  N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|-----:|-----:|------:|-------:|---------:|-------:|---------:|
| 7096 | 1774 |     0 |  7.529 |   942.43 | 33.214 |    53.41 |
| 7096 | 1774 |  7096 |  7.167 |   990.06 | 36.596 |    48.48 |
| 7096 | 1774 | 14192 |  7.136 |   994.39 | 37.186 |    47.71 |
| 7096 | 1774 | 21288 |  7.144 |   993.26 | 37.861 |    46.86 |
| 7096 | 1774 | 28384 |  7.220 |   982.84 | 38.111 |    46.55 |
| 7096 | 1774 | 35480 |  7.305 |   971.38 | 38.395 |    46.20 |


Definitely a fast fully offloaded model, seems like it is good enough for some uses too!

Is there a problem with the IK_Llama.cpp version with this model? Am I doing something wrong?
I have a relatively weak configuration: an i5 12400f processor, 96GB of RAM, and a 4070 with 12GB of VRAM.
However, I am having great success using this model with llama.cpp.
Many people praise the IK version of llama for its speed, and I decided to try it with your quantized versions.
The speed is very disappointing.
I am running the Qwen3-Coder-Next model using llama.cpp with the following commands:

```
llama-server.exe --model G:\LlamaModels\Qwen3-Coder-Next-MXFP4_MOE.gguf --port {PORT} --ctx-size 128000 --fit on --fit-target 512 --fit-ctx 16384 --batch-size 512 --mlock --host 0.0.0.0 --jinja --temp 1.0 --min-p 0.01 --top-p 0.95 --top-k 40
```

```
llama-server.exe --model G:\LlamaModels\Qwen3-Coder-Next-UD-Q8_K_XL-00001-of-00003.gguf --port {PORT} --ctx-size 128000 --fit on --fit-target 512 --fit-ctx 16384 --batch-size 512 --mlock --host 0.0.0.0 --jinja --temp 1.0 --min-p 0.01 --top-p 0.95 --top-k 40
```

I get 20 tokens per second with the first one and 15 tokens per second with the second.
However, the model itself is quite good. The MXFP4 version only starts to slow down noticeably when the filled context exceeds 100,000 tokens.

IK version:

```
llama-server.exe --model G:\LlamaModels\Qwen3-Coder-Next-IQ4_KSS.gguf --port ${PORT} --ctx-size 128000 -ub 2048 -b 2048 --threads 1 --no-mmap -sm graph --merge-qkv -ger -fa on --n-gpu-layers 99 --jinja -ot ".ffn_.*_exps.=CPU" --host 0.0.0.0 --temp 1.0 --min-p 0.01 --top-p 0.95 --top-k 40
```

I get 3-4 tokens per second.
Could the issue be with -ot ".ffn_.*_exps.=CPU"?

@aldubl

Is there a problem with the IK_Llama.cpp version with this model?

No, I've tested it just today with some of the latest speed boosts and it runs fine:

(sweep-bench chart: sweep-bench-Qwen3-Coder-Next-PR1307)

Am I doing something wrong?

Yes. You're running on CPU and only giving it 1 thread, for starters. There is no `--fit on` in ik_llama.cpp; you will need to manually allocate additional routed-expert layers to make better use of VRAM. Check this guide: https://gist.github.com/DocShotgun/a02a4c0c0a57e43ff4f038b46ca66ae0
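The `-ot` overrides are just regular expressions matched against tensor names, so they can be sanity-checked without loading the model. A minimal sketch using `grep -E` against a few made-up tensor names (the names below are illustrative, not dumped from the real GGUF):

```shell
# Made-up tensor names in the style of a MoE GGUF (illustrative only)
printf '%s\n' \
  'blk.0.attn_q.weight' \
  'blk.0.ffn_gate_exps.weight' \
  'blk.3.ffn_down_exps.weight' \
  'blk.4.ffn_up_exps.weight' > tensors.txt

# Blanket override pattern: every routed-expert tensor matches, so all experts stay on CPU
grep -E '.ffn_.*_exps.' tensors.txt

# Selective override pattern: only the expert tensors of layers 0-3 go to CUDA0
grep -E 'blk\.(0|1|2|3)\.ffn_(gate|up|down)_exps.*' tensors.txt
```

Anything not matched by an override is placed by the normal `--n-gpu-layers` logic, which is why combining a selective CUDA0 override with `--cpu-moe` works.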

A rough command will be something like:

```
llama-server.exe --model G:\LlamaModels\Qwen3-Coder-Next-IQ4_KSS.gguf --port ${PORT} --ctx-size 128000 -ub 2048 -b 2048 --threads 6 --no-mmap -ger --merge-qkv -fa on --n-gpu-layers 99 --jinja -ot "blk\.(0|1|2|3)\.ffn_(gate|up|down)_exps.*=CUDA0" --cpu-moe --host 0.0.0.0 --temp 1.0 --min-p 0.01 --top-p 0.95 --top-k 40
```

Add more layers until it OOMs your VRAM. I removed `-sm graph`, as that is only for 2 or more GPUs.
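Bumping the layer count means rewriting the alternation in the regex each time; a throwaway helper can generate it (this snippet is an assumption for convenience, not part of ik_llama.cpp):

```shell
# Generate the CUDA0 override regex for the first N routed-expert layers
N=4
layers=$(seq 0 $((N - 1)) | paste -sd'|' -)   # e.g. 0|1|2|3
printf -- '-ot "blk\\.(%s)\\.ffn_(gate|up|down)_exps.*=CUDA0"\n' "$layers"
```

Raise `N` by one, restart the server, and watch VRAM usage until it no longer fits.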

Keep us posted, good luck!



Thank you very much! You're awesome, man. With your team's efforts, the generation speed has increased to 17.55 t/s, which is lower than llama.cpp, but the prompt processing speed has reached an impressive 423.93 t/s!
Thank you again, especially for the article. I hope it will help me deepen my understanding. I'll continue experimenting now.

@aldubl

Thank you very much! You're awesome, man. With your team's efforts,

You're welcome! Glad you're getting better results now!

prompt processing speed has reached an impressive 423.93 t/s!

If your CPU has AVX512-VNNI (check with `lscpu | grep avx512_vnni`), you can get better PP on ik_llama.cpp as well. If you're focusing on PP, you can also increase batch sizes, e.g. try `-ub 4096 -b 4096`, which takes more VRAM but often gives better PP in configurations like yours.
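For reference, a small check script (assumes a Linux or WSL shell; `lscpu` comes from util-linux, with `/proc/cpuinfo` as a fallback):

```shell
# Report whether the CPU advertises AVX512-VNNI, which speeds up prompt processing
if lscpu 2>/dev/null | grep -q avx512_vnni || grep -q avx512_vnni /proc/cpuinfo 2>/dev/null; then
    echo "avx512_vnni: supported"
else
    echo "avx512_vnni: not found"
fi
```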

To get back some TG you could possibly add another layer to the GPU offload, e.g. `-ot "blk\.(0|1|2|3|4)\.ffn_(gate|up|down)_exps.*=CUDA0"`, but it's all trade-offs against context length etc. You can also try `-ctk q8_0 -ctv q8_0` to use less VRAM for the KV cache and fit more layers to improve TG. Have fun experimenting!

Do you have plans for a q8 or bf16?

@shewin

No. In general, the bf16 and q8_0 produced by all quantizers should be the same, assuming they use the mainline llama.cpp convert_hf_to_gguf.py script on the original bf16 safetensors and quantize with `--pure` q8_0 and no imatrix, as is standard. Because my public repo quota on HF is limited, I avoid uploading pure models that are available elsewhere.

tl;dr: I recommend you skip the bf16 and grab this Q8_0: https://huggingface.co/ggml-org/Qwen3-Coder-Next-GGUF/tree/main

Also, new Qwen models coming in now: https://huggingface.co/ubergarm/Qwen3.5-122B-A10B-GGUF
