IQ4XS quants work great!

#5
by nimishchaudhari - opened

Made a very pretty solar system simulation quite fast.

MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf

Prompt

  • Tokens: 19
  • Time: 675.147 ms
  • Speed: 28.14 t/s

Generation

  • Tokens: 3369
  • Time: 296111.681 ms
  • Speed: 11.38 t/s

Context

  • n_ctx: 131072
  • n_past: 3402
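As a sanity check, the reported speeds are just tokens divided by elapsed seconds; a quick awk one-liner using the figures above reproduces them:

```shell
# Generation: 3369 tokens over 296111.681 ms
awk 'BEGIN { printf "gen: %.2f t/s\n", 3369 / (296111.681 / 1000) }'   # gen: 11.38 t/s
# Prompt processing: 19 tokens over 675.147 ms
awk 'BEGIN { printf "pp: %.2f t/s\n", 19 / (675.147 / 1000) }'         # pp: 28.14 t/s
```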

Not super disappointing, considering the model is all in RAM with only the experts and KV cache on the GPUs.

Thanks for these quants! I can write complex agentic workflows, plan them with GPT OSS 120B, and then run the agents to develop overnight πŸ«₯

Great to hear!

Yeah, in my limited testing even the smaller smol-IQ3_KS is working with opencode running CPU-only haha... (fast DDR5-6400 MT/s RAM, thanks L1T Wendell! lol)

If you have free VRAM you could probably offload a few more routed experts, if you don't need the additional kv-cache.
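For anyone wanting to try that, here's a minimal sketch of what offloading more routed experts could look like with llama.cpp's tensor-override flag. The layer-index regex and context size are made-up placeholders, and `-ot` / `--override-tensor` behavior can vary by llama.cpp version, so treat this as a starting point rather than a tested recipe:

```shell
# Sketch: offload all layers with -ngl, then pin the routed experts
# (ffn_*_exps tensors) of layers 10+ to CPU, keeping layers 0-9's
# experts on GPU. Adjust the layer range to fill your free VRAM.
llama-server -m MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf \
  -ngl 99 \
  -ot "blk\.(1[0-9]|[2-9][0-9])\.ffn_.*_exps\.=CPU" \
  -c 32768
```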

Just finished downloading and it is pretty good! Here are some speed benchmarks:

% llama-bench -m MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf -fa 1 -t 1 -ngl 99 -b 2048 -ub 2048 -d 0,10000                                            
ggml_metal_device_init: tensor API disabled for pre-M5 and pre-A19 devices
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.022 sec
ggml_metal_rsets_init: creating a residency set collection (keep_alive = 180 s)
ggml_metal_device_init: GPU name:   MTL0
ggml_metal_device_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: has tensor            = false
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 134217.73 MB
| model                          |       size |     params | backend    | threads | n_ubatch | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | -------: | -: | --------------: | -------------------: |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |           pp512 |        291.93 Β± 1.00 |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |           tg128 |         37.54 Β± 0.07 |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |  pp512 @ d10000 |        196.77 Β± 0.23 |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |  tg128 @ d10000 |         27.81 Β± 0.10 |

Step 3.5 flash is still my favorite though, as this one is a very tight fit with 32k context.

@tarruda

Yeah, it seems like MiniMax-M2.5 takes more kv-cache VRAM for similar context. I haven't looked into the arch details to explain that yet.

You can try adding -khad -ctk q6_0 -ctv q6_0 to save at least half the kv-cache space for longer context, compared to the unquantized full fp16 it looks like you're using. Oh wait, if you're on mainline llama.cpp just try -ctk q8_0 -ctv q8_0 and you can fit more context.
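Concretely, on mainline llama.cpp that could look like the following sketch. The model path and context size are just placeholders here; q8_0 KV cache roughly halves the footprint versus the default f16, and depending on your build a quantized V cache may also require flash attention to be enabled:

```shell
# Sketch: quantize both K and V caches to q8_0, roughly halving KV VRAM
# vs the default f16 so a larger -c fits in the same memory budget.
# (Some builds require flash attention, -fa, for a quantized V cache.)
llama-server -m MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf \
  -ngl 99 \
  -ctk q8_0 -ctv q8_0 \
  -c 65536
```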

I posted some speed benchmarks here: https://www.reddit.com/r/LocalLLaMA/comments/1r40o83/comment/o58rg7k/
