Which quant is best for Mac?

#6
by thejson - opened

M3 Studio Ultra 96GB. Are the low quants usable for daily use?

Only if you can run headless. Even the IQ2 quants come to ~90GB total usage with 65768t of context, which is enough for chatting, but not for a coding assistant.

I benchmarked running the IQ2_KS 69.800 GiB (2.622 BPW) with 128k context in 96GB VRAM here: https://www.reddit.com/r/LocalLLaMA/comments/1r40o83/comment/o58rg7k/

The trick is to save some space by quantizing the kv-cache with -khad -ctk q6_0 -ctv q8_0, which frees up room for more context.
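To put rough numbers on the kv-cache savings: cache size scales linearly with bytes per element, so dropping from f16 toward q8_0/q6_0 cuts it roughly in half. The sketch below uses illustrative model dimensions (the layer count, kv-head count, and head dim are placeholders, not MiniMax's actual architecture), with ~8.5 and ~6.5 bits per element assumed for q8_0 and q6_0:

```shell
#!/bin/sh
# Rough kv-cache size estimate: 2 (K and V) * layers * kv_heads * head_dim
# * bytes_per_element * context_tokens. Dimensions below are ILLUSTRATIVE
# placeholders, not the real MiniMax architecture.
layers=60
kv_heads=8
head_dim=128
ctx=65536

# bytes/element: f16 = 2.0, q8_0 ~ 1.0625 (8.5 bpw), q6_0 ~ 0.8125 (6.5 bpw)
for fmt in "f16 2.0" "q8_0 1.0625" "q6_0 0.8125"; do
  set -- $fmt
  awk -v name="$1" -v b="$2" -v l="$layers" -v h="$kv_heads" \
      -v d="$head_dim" -v c="$ctx" \
      'BEGIN { printf "%-5s ~%.1f GiB\n", name, 2*l*h*d*b*c/2^30 }'
done
```

With these made-up dims the cache goes from ~15 GiB at f16 to ~8 GiB at q8_0, which is the kind of headroom that makes the difference between fitting context or not.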

Not 100% sure how well it would run on a Mac. The mainline IQ4_NL might be good for a 128GB Mac, but at 121.234 GiB (4.554 BPW) it is probably kind of tight, so maybe not enough room for context ... hrm...
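One knob worth knowing on a dedicated Mac: by default macOS caps GPU-wired (Metal) memory at roughly 70-75% of unified RAM, so a 128GB machine won't hand the whole 128GB to the model. On recent macOS the cap can reportedly be raised with a sysctl; treat the exact key name as an assumption to verify on your own system (it has changed across releases):

```shell
# Raise the GPU wired-memory cap to ~120 GiB on a 128GB Mac (value in MiB).
# Key name assumed for recent macOS (older releases used
# debug.iogpu.wired_limit); the setting resets on reboot.
sudo sysctl iogpu.wired_limit_mb=122880
```

Leave a few GiB for the OS or the machine can become unresponsive under memory pressure.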

@coughmedicine @thejson

I think @tarruda is using a Mac with IQ4_XS; maybe check with them on their quant choice and exact command?

This is the script template I use:

#!/bin/sh -e

model=$HOME/ml-models/huggingface/ubergarm/MiniMax-M2.5-GGUF/IQ4_XS/MiniMax-M2.5-IQ4_XS-00001-of-00004.gguf
ctx=32768
parallel=1
ctx_size=$((ctx * parallel))

llama-server --no-mmap --no-warmup --model "$model" -np $parallel --temp 1.0 --top-p 0.95 --top-k 40 --ctx-size $ctx_size --jinja -fa on --host 0.0.0.0 -cram 0

Note that I have a 128GB Mac Studio exclusively for running LLMs and I don't even log in, so idle RAM usage is ~2-3GB. Even so, MiniMax M2.5 IQ4_XS takes nearly all the RAM, and I have to pass -cram 0 to prevent llama.cpp from growing RAM usage with cached prompts (or else it will swap).
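Once the server is up, a quick way to confirm it's serving is to hit the OpenAI-compatible endpoint that llama-server exposes. The hostname here is a placeholder for your Mac's address (the script binds 0.0.0.0, and llama-server listens on port 8080 by default):

```shell
# Sanity-check the running llama-server from another machine on the LAN.
# "mac-studio.local" is a placeholder hostname.
curl -s http://mac-studio.local:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Reply with one word."}], "max_tokens": 8}'
```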

I gave MiniMax 2.5 a shot, but honestly I feel like Step 3.5 Flash is still better; it's my new favorite in that size range.

If you have a 128GB Mac, here's how I run Step 3.5 Flash:

#!/bin/sh -e

model=$HOME/ml-models/huggingface/ubergarm/Step-3.5-Flash-GGUF/IQ4_XS/Step-3.5-Flash-IQ4_XS-00001-of-00004.gguf
ctx=102400
parallel=2

ctx_size=$((ctx * parallel))

llama-server --no-mmap --no-warmup --model "$model" --ctx-size $ctx_size -np $parallel -fa on --temp 1.0 -b 2048 -ub 2048 --host 0.0.0.0 -cram 6G

Yes, I can run two 102400-token streams in parallel and it still uses less RAM than MiniMax. Note that Step 3.5 Flash uses SWA (sliding window attention) to make things more efficient.
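One detail worth spelling out: llama-server divides --ctx-size evenly across the -np slots, which is why the script multiplies first, so each stream keeps the full window:

```shell
# Each of the -np slots gets ctx_size / parallel tokens of context,
# so multiply up front to give every stream the full window.
ctx=102400
parallel=2
ctx_size=$((ctx * parallel))
echo "total=$ctx_size per_slot=$((ctx_size / parallel))"
# prints: total=204800 per_slot=102400
```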
