IQ4XS quants work great!

#5
by nimishchaudhari - opened

It made a very pretty solar system simulation quite fast.

MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf

Prompt

  • Tokens: 19
  • Time: 675.147 ms
  • Speed: 28.14 t/s

Generation

  • Tokens: 3369
  • Time: 296111.681 ms
  • Speed: 11.38 t/s

Context

  • n_ctx: 131072
  • n_past: 3402

Not super disappointing, considering the model is all in RAM and only the experts and kv-cache are on the GPUs.

Thanks for these quants. I can write complex agentic workflows, plan them with GPT-OSS-120B, and then run the agents to develop overnight πŸ«₯

Great to hear!

Yeah, in my limited testing the even smaller smol-IQ3_KS is working with opencode running CPU-only haha... (fast DDR5-6400MT/s RAM, thanks L1T Wendell! lol)

If you have free VRAM you could probably offload a few more routed exps, if you don't need the additional kv-cache too.
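A rough sketch of what that offload split can look like with llama.cpp's tensor overrides (the layer range and the `CUDA0` device name are illustrative assumptions, not from this thread; the first matching `-ot` pattern wins, so the GPU rule must come before the catch-all CPU rule):

```shell
# Pin layers 0-9's routed experts on the GPU, keep the rest of the experts in
# system RAM; attention/shared weights stay on GPU via -ngl.
# Widen or narrow the "blk\.[0-9]\." range to match your free VRAM.
llama-server \
  -m MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf \
  -ngl 99 \
  -ot "blk\.[0-9]\.ffn_.*_exps\.=CUDA0" \
  -ot "exps=CPU"
```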

Just finished downloading and it is pretty good! Here are some speed benchmarks:

% llama-bench -m MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf -fa 1 -t 1 -ngl 99 -b 2048 -ub 2048 -d 0,10000                                            
ggml_metal_device_init: tensor API disabled for pre-M5 and pre-A19 devices
ggml_metal_library_init: using embedded metal library
ggml_metal_library_init: loaded in 0.022 sec
ggml_metal_rsets_init: creating a residency set collection (keep_alive = 180 s)
ggml_metal_device_init: GPU name:   MTL0
ggml_metal_device_init: GPU family: MTLGPUFamilyApple7  (1007)
ggml_metal_device_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_device_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_device_init: simdgroup reduction   = true
ggml_metal_device_init: simdgroup matrix mul. = true
ggml_metal_device_init: has unified memory    = true
ggml_metal_device_init: has bfloat            = true
ggml_metal_device_init: has tensor            = false
ggml_metal_device_init: use residency sets    = true
ggml_metal_device_init: use shared buffers    = true
ggml_metal_device_init: recommendedMaxWorkingSetSize  = 134217.73 MB
| model                          |       size |     params | backend    | threads | n_ubatch | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | -------: | -: | --------------: | -------------------: |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |           pp512 |        291.93 Β± 1.00 |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |           tg128 |         37.54 Β± 0.07 |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |  pp512 @ d10000 |        196.77 Β± 0.23 |
| minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 114.84 GiB |   228.69 B | MTL,BLAS   |       1 |     2048 |  1 |  tg128 @ d10000 |         27.81 Β± 0.10 |

Step 3.5 Flash is still my favorite though, as this one is a very tight fit with 32k context.

@tarruda

Yeah, it seems like MiniMax-M2.5 takes more kv-cache VRAM for similar context lengths. I haven't looked into the arch details to explain that yet.

You can try adding -khad -ctk q6_0 -ctv q8_0 to save at least half the kv-cache space for longer context, compared with the unquantized full fp16 it looks like you're running. Oh wait, if you're on mainline llama.cpp, just try -ctk q8_0 -ctv q8_0 to fit more context.
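For mainline llama.cpp, a minimal sketch of that (model path copied from this thread; the context size is just an example, the `-fa` syntax varies between builds, and quantized kv-cache needs flash attention enabled):

```shell
# q8_0 for both K and V roughly halves kv-cache memory vs fp16 at the same n_ctx
llama-server \
  -m MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf \
  -c 65536 \
  -fa 1 \
  -ctk q8_0 -ctv q8_0
```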

I posted some speed benchmarks here: https://www.reddit.com/r/LocalLLaMA/comments/1r40o83/comment/o58rg7k/

For me the tokens per second don't go above 15 even if I offload some more exps, so I've learned to swallow the hard pill of working with slow models.

Speaking of fast RAM, I have 192GB at 7000MT/s but an i7-14700F, so I couldn't get it running faster than 5600MT/s. Any advice about how I can get it working at faster speeds with 4 sticks of DDR5?

I am using MiniMax-M2.5 with OpenClaw for now; it does quite well but misses the mark at times. I am not sure if this implementation works correctly yet, but it's still very impressive for a model that I can host myself 😀

Any advice about how I can get it working at faster speeds with 4 sticks of DDR5?

I think memory bandwidth is the bottleneck for token generation. I get good t/s on generation because my Mac has 800GB/s memory bandwidth (OTOH it kinda sucks at prompt processing...).

@nimishchaudhari

how can I get it working at faster speeds with 4 sticks of DDR5?

4x DIMMs is the "verboten" configuration haha.. I'm running 2x48GB on my home AMD 9950X rig and able to get DDR5-6400MT/s, clocking about 86GiB/s memory read bandwidth using mlc (Intel Memory Latency Checker). I bottleneck at over 90% of the theoretical max TG (memory read bandwidth divided by the active bytes read per token).
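That upper bound is easy to sketch numerically. Assuming ~10B active params per token (the "A10B" in the model name), IQ4_XS's 4.25 bpw, and the ~86 GiB/s measured above, all taken from this thread and rounded:

```shell
# Theoretical max token generation rate for a bandwidth-bound MoE:
#   bytes read per token = active_params * bits_per_weight / 8
#   max tok/s            = read_bandwidth / bytes_per_token
awk 'BEGIN {
  active_params = 10e9    # ~A10B active params per token
  bpw = 4.25              # IQ4_XS bits per weight
  bw_gib = 86             # measured read bandwidth, GiB/s (mlc)
  gib_per_tok = active_params * bpw / 8 / (1024^3)
  printf "%.2f GiB/token -> theoretical max %.1f tok/s\n", gib_per_tok, bw_gib / gib_per_tok
}'
# prints: 4.95 GiB/token -> theoretical max 17.4 tok/s
```

At 90%+ efficiency that lands right around the ~15 tok/s ceiling reported earlier in the thread.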

So you could watch some "Actually Hardcore Overclocking" videos on YouTube and spend a couple of days tweaking your BIOS and stability testing to OC the memory, but it's likely not worth it given the possible instability. I have a thread on how I did it, but it's a different mobo/CPU, so likely less relevant to you.

@tarruda

Yeah, Macs seem to have weaker compute, which is what PP really needs to calculate the kv-cache. That is pretty amazing bandwidth for TG though!
