Hot Damn This Model Cooks!

by aaron-newsome

I was already pretty impressed by M2, and I ran it under HEAVY usage locally. It probably knocked out a few dozen commits across a few fairly complex projects. Once I began using M2, I never fired up glm-4.5 again. There was a brief window of a few days after GLM 4.7 dropped where I ran it and liked it, but it's much slower on my hardware than MiniMax.

I've been running M2.1 since @unsloth dropped these quants, and man does it cook! GLM 4.7 might be better at some things, but speed matters too when working through so many issues in code. Yes, I like GLM 4.7, but my setup will most likely stay on M2.1 until something better comes along.

As always, thanks to the Unsloth team for getting these quants out quickly. I appreciate all that you guys do. Btw, I'm using opencode.

What quant are you using?

I used the Q8_K_XL for a solid few days. Now I'm trying the Q6 to see how it performs.

Please tell us how you find Q6 vs Q8.

Cerebras provided a REAP version of MiniMax M2, so I think they will do something similar for MiniMax M2.1. Waiting for it.

I tried the REAP MiniMax too, @puchuu, but it didn't work well for me. The model would frequently start to feel really dumb when working through complex coding issues. The Unsloth GGUFs have been much more reliable for me.

As far as Q8 vs Q6, I haven't done any A/B testing on the exact same coding challenges with the exact same prompts. The typical one-shot tests you see reviewers doing aren't really useful to me; most models do pretty well with one-shot prompts. The real test is when you let a model loose in an existing codebase and you're trying to fix issues, add new features, refine workflows, etc. The Q8 is a REALLY tight fit on my setup until I get the 4th GPU installed in the system. All I can say is that when I'm running the Q6 and it stumbles on a fix or makes a non-optimal edit, I think to myself: would the Q8 have made the same mistake?

Once I get the 4th GPU in the system, it's likely I won't run GGUF at all and I'll just run the safetensors version. Until then, I'm keeping the Q8 loaded and working HEAVILY.

Unsloth AI org

Thanks for trying them out, @aaron-newsome! <3 Always love reading people say the quants are great! And of course, thanks to the MiniMax team for releasing them! :)

Hi,

I am also experimenting with this model, but I am not quite convinced by it when using the recommended settings, which are: temp 1.0, top-p 0.95, top-k 40.

I was wondering if those are the settings you use, or maybe you have a little tweak?
For some tasks I needed to add a repeat-penalty of 1.05 to avoid endless responses.
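
To be concrete about the values, this is the sort of per-request payload I'm talking about (just a sketch against llama-server's native /completion endpoint; the port assumes the default 8080 and the prompt is only a placeholder):

# per-request sampling override with the same values I'd otherwise pass as launch flags
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "hello",
    "n_predict": 128,
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "repeat_penalty": 1.05
  }'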

Yes, I do have repeat penalty set to 1.05. With Opencode, I never see the model repeat itself. I've tried a few different ways of keeping the model on track for longer tasks: speckit, openspec, and I'm currently really liking beads (bd) for task management, because most of my coding is done on existing codebases; it's not often I'll be creating a new app from scratch.

My llama.cpp startup script varies from day to day, but it's just me trying to tweak everything possible to get the best, most stable, and fastest setup. The Unsloth guides are nice, but I feel like they leave some optimal settings out. I've also seen some speedups by tweaking the llama.cpp compile flags, something I never would have found without experimentation. Currently, the compile flags in my Dockerfile are:

RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu129
RUN pip3 install ninja; MAX_JOBS=$(($(nproc)/2)) TORCH_CUDA_ARCH_LIST="12.9" pip3 install flash-attn==2.8.3 --no-build-isolation
ENV LD_LIBRARY_PATH="/llama.cpp/build/bin:/usr/local/cuda-12.8/compat/:$LD_LIBRARY_PATH"
RUN cmake llama.cpp -B llama.cpp/build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON -DGGML_CUDA_FA_ALL_QUANTS=ON -DLLAMA_FLASH_ATTN=ON -DGGML_CUDA_FORCE_MMQ=ON -DCMAKE_PREFIX_PATH=/usr/local/python3.12.11/lib/python3.12/site-packages/flash_attn -DCUDA_ARCH_LIST="12.9" -DCMAKE_CXX_FLAGS="-march=x86-64-v3 -O3" -DGGML_CUDA_GRAPHS=ON -DCMAKE_INTERPROCEDURAL_OPTIMIZATION=ON
RUN cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split llama-server

My startup script is exactly as follows:

llama-server \
  --model /mnt/data/models/MiniMax-M2.1-UD-Q8_K_XL/MiniMax-M2.1-UD-Q8_K_XL-00001-of-00006.gguf \
  --host 0.0.0.0 \
  --alias minimax-m2 \
  --n-gpu-layers -1 \
  --ctx-size 131072 \
  --cache-ram 4096 \
  --threads 8 \
  --tensor-split 32,34,34 \
  --temp 1.0 \
  --min-p 0.0 \
  --top-p 0.95 \
  --top-k 40 \
  --repeat-penalty 1.05 \
  --ctx-checkpoints 2 \
  --reasoning-format auto \
  --flash-attn on \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  --batch-size 4096 \
  --ubatch-size 2048 \
  --cont-batching \
  --jinja 

I do make changes but this is the current setup. Nothing special in Opencode config.
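
In case it helps anyone wiring up their own client: the quickest sanity check I know is hitting the OpenAI-compatible endpoint that llama-server exposes. The model name matches the --alias above, and the port assumes llama-server's default 8080 since I don't override it:

# quick check that the server answers under the aliased model name
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minimax-m2",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'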
