Fantastic performance!
I use this model with Opencode and especially the prompt eval is crazy fast. Well done!
Thank you for the kind words! This is exactly the kind of feedback that motivates us. If you find any weird edge cases, or cases where the models don't perform well at all, let us know! We're trying to expand our benchmarking as much as possible and want to be able to capture those cases in the future.
I managed to run some benchmarks against other IQ4 and Q4 models. This system is a bit CPU-bound: an old Xeon W at only 3.6 GHz and 2× RTX 5060 Ti 16 GB, each connected at PCIe 3.0 x8. My primary use case is local coding, and I use opencode (which I really like), so prompt eval is my main concern. tg is not that important, but when the agent is reading lots of files or docs to explore a solution, waiting for prompt eval is really annoying.
With that in mind, I set the benchmark to run prompt eval at 32768 tokens. I find long contexts are where this model really shines.
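For anyone wanting to replicate the run: llama-bench accepts comma-separated prompt and generation lengths, so one invocation along these lines covers all the sizes in the table below (the model path is a placeholder, not my actual setup):

```shell
# Benchmark prompt processing (pp) and token generation (tg)
# at the same sizes as in the table: pp 512/1024/32768, tg 128/512/3072.
./build/bin/llama-bench \
  -m models/GLM-4.7-Flash-IQ4_XS.gguf \
  -p 512,1024,32768 \
  -n 128,512,3072
```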
I compiled llama.cpp with Blackwell support (CUDA arch 120a).
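A build along these lines should produce native Blackwell kernels; treat it as a sketch rather than my exact command (in particular the BLAS option and job count are guesses based on the backend list in the log):

```shell
# Configure llama.cpp with CUDA targeting sm_120a (Blackwell).
# CMAKE_CUDA_ARCHITECTURES=120a needs CUDA 12.8+ and a recent CMake.
cmake -B build \
  -DGGML_CUDA=ON \
  -DGGML_BLAS=ON \
  -DCMAKE_CUDA_ARCHITECTURES=120a
cmake --build build --config Release -j
```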
llama-cpp-server | load_backend: loaded BLAS backend from /app/libggml-blas.so
llama-cpp-server | ggml_cuda_init: found 2 CUDA devices:
llama-cpp-server | Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
llama-cpp-server | Device 1: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
llama-cpp-server | load_backend: loaded CUDA backend from /app/libggml-cuda.so
llama-cpp-server | load_backend: loaded CPU backend from /app/libggml-cpu-skylakex.so
llama-cpp-server | main: n_parallel is set to auto, using n_parallel = 4 and kv_unified = true
llama-cpp-server | build: 8125 (e877ad8bd) with GNU 13.3.0 for Linux x86_64
llama-cpp-server | system info: n_threads = 4, n_threads_batch = 4, total_threads = 4
llama-cpp-server |
llama-cpp-server | system_info: n_threads = 4 (n_threads_batch = 4) / 4 | CUDA : ARCHS = 1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | BLACKWELL_NATIVE_FP4 = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
llama-cpp-server |
llama-cpp-server | | model | size | params | backend | threads | test | t/s | Full Model |
llama-cpp-server | | -------------------------------------------- | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: | ---------------------------------------------------------: |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 15.15 GiB | 29.94 B | BLAS,CUDA | 4 | pp512 | 2553.33 ± 4.81 | unsloth/GLM-4.7-Flash-IQ4_XS.gguf |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 15.15 GiB | 29.94 B | BLAS,CUDA | 4 | pp1024 | 2728.10 ± 5.33 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 15.15 GiB | 29.94 B | BLAS,CUDA | 4 | pp32768 | 740.10 ± 1.10 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 15.15 GiB | 29.94 B | BLAS,CUDA | 4 | tg128 | 76.51 ± 0.73 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 15.15 GiB | 29.94 B | BLAS,CUDA | 4 | tg512 | 71.20 ± 0.34 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 15.15 GiB | 29.94 B | BLAS,CUDA | 4 | tg3072 | 53.34 ± 0.04 | |
llama-cpp-server | | deepseek2 30B.A3B Q4_K - Medium | 13.14 GiB | 23.00 B | BLAS,CUDA | 4 | pp512 | 2584.51 ± 10.42 | unsloth/GLM-4.7-Flash-REAP-23B-A3B-Q4_K_M.gguf |
llama-cpp-server | | deepseek2 30B.A3B Q4_K - Medium | 13.14 GiB | 23.00 B | BLAS,CUDA | 4 | pp1024 | 2749.52 ± 16.24 | |
llama-cpp-server | | deepseek2 30B.A3B Q4_K - Medium | 13.14 GiB | 23.00 B | BLAS,CUDA | 4 | pp32768 | 722.07 ± 0.47 | |
llama-cpp-server | | deepseek2 30B.A3B Q4_K - Medium | 13.14 GiB | 23.00 B | BLAS,CUDA | 4 | tg128 | 71.73 ± 0.52 | |
llama-cpp-server | | deepseek2 30B.A3B Q4_K - Medium | 13.14 GiB | 23.00 B | BLAS,CUDA | 4 | tg512 | 66.90 ± 0.29 | |
llama-cpp-server | | deepseek2 30B.A3B Q4_K - Medium | 13.14 GiB | 23.00 B | BLAS,CUDA | 4 | tg3072 | 50.83 ± 0.07 | |
llama-cpp-server | | nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw | 16.91 GiB | 31.58 B | BLAS,CUDA | 4 | pp512 | 2710.09 ± 17.99 | unsloth/Nemotron-3-Nano-30B-A3B-IQ4_XS.gguf |
llama-cpp-server | | nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw | 16.91 GiB | 31.58 B | BLAS,CUDA | 4 | pp1024 | 3451.64 ± 16.10 | |
llama-cpp-server | | nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw | 16.91 GiB | 31.58 B | BLAS,CUDA | 4 | pp32768 | 2649.96 ± 5.02 | |
llama-cpp-server | | nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw | 16.91 GiB | 31.58 B | BLAS,CUDA | 4 | tg128 | 113.09 ± 1.26 | |
llama-cpp-server | | nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw | 16.91 GiB | 31.58 B | BLAS,CUDA | 4 | tg512 | 113.50 ± 0.28 | |
llama-cpp-server | | nemotron_h_moe 31B.A3.5B IQ4_XS - 4.25 bpw | 16.91 GiB | 31.58 B | BLAS,CUDA | 4 | tg3072 | 112.43 ± 0.15 | |
llama-cpp-server | | qwen3moe 30B.A3B IQ4_XS - 4.25 bpw (guessed) | 14.91 GiB | 30.53 B | BLAS,CUDA | 4 | pp512 | 2561.64 ± 13.29 | Byteshape/Qwen3-Coder-30B-A3B-Instruct-IQ4_XS-4.20bpw.gguf |
llama-cpp-server | | qwen3moe 30B.A3B IQ4_XS - 4.25 bpw (guessed) | 14.91 GiB | 30.53 B | BLAS,CUDA | 4 | pp1024 | 2830.24 ± 6.13 | |
llama-cpp-server | | qwen3moe 30B.A3B IQ4_XS - 4.25 bpw (guessed) | 14.91 GiB | 30.53 B | BLAS,CUDA | 4 | pp32768 | 849.71 ± 0.35 | <---- |
llama-cpp-server | | qwen3moe 30B.A3B IQ4_XS - 4.25 bpw (guessed) | 14.91 GiB | 30.53 B | BLAS,CUDA | 4 | tg128 | 108.97 ± 1.89 | |
llama-cpp-server | | qwen3moe 30B.A3B IQ4_XS - 4.25 bpw (guessed) | 14.91 GiB | 30.53 B | BLAS,CUDA | 4 | tg512 | 103.27 ± 0.62 | |
llama-cpp-server | | qwen3moe 30B.A3B IQ4_XS - 4.25 bpw (guessed) | 14.91 GiB | 30.53 B | BLAS,CUDA | 4 | tg3072 | 100.00 ± 0.25 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 11.70 GiB | 23.00 B | BLAS,CUDA | 4 | pp512 | 2776.45 ± 14.63 | unsloth/GLM-4.7-Flash-REAP-23B-A3B-IQ4_XS.gguf |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 11.70 GiB | 23.00 B | BLAS,CUDA | 4 | pp1024 | 2938.64 ± 11.51 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 11.70 GiB | 23.00 B | BLAS,CUDA | 4 | pp32768 | 731.76 ± 0.33 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 11.70 GiB | 23.00 B | BLAS,CUDA | 4 | tg128 | 73.46 ± 0.03 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 11.70 GiB | 23.00 B | BLAS,CUDA | 4 | tg512 | 68.25 ± 0.16 | |
llama-cpp-server | | deepseek2 30B.A3B IQ4_XS - 4.25 bpw | 11.70 GiB | 23.00 B | BLAS,CUDA | 4 | tg3072 | 51.71 ± 0.06 | |
I have marked the value that is most important to me. This model is noticeably quicker at processing long prompts. Nemotron is quicker still, but also a lot worse at coding.
My previous daily driver was unsloth/GLM-4.7-Flash-REAP-23B-A3B-Q4_K_M.gguf, which is "only" about 120 t/s slower at prompt eval, but these numbers don't really do it justice. The Byteshape model feels a lot faster.
This is incredibly helpful, thank you for taking the time to run and share these benchmarks, especially at 32K context. Really appreciate the detailed setup and comparisons. Feedback like this genuinely helps us improve our work and future releases.
