GLM 4.5, 4.6, 4.7 Quality of Life updates

#7
by danielhanchen - opened
Unsloth AI org

We did a refresh of quants (quality of life updates) for GLM 4.5, 4.6, and 4.7.

llama.cpp and other inference engines like LM Studio now support more features, including but not limited to:

  1. Non-ASCII decoding for tool calls (affects non-English languages). For example, the previous default (`ensure_ascii=True`) would encode "café" → "caf\u00e9", whilst now `ensure_ascii=False` keeps "café" → "café". We recommend re-downloading our quants if you use languages other than English.
  2. Reverts reasoning-content parsing to the original `[0]` and `[-1]` indexing from our `|first` and `|last` changes. We used to change `[0]` to `|first` and `[-1]` to `|last` to be compatible with LM Studio and llama-cli. With llama-cli upgraded to use llama-server, we can revert this. llama-server also didn't like `|first`, so we fixed that as well.
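The `ensure_ascii` change above can be illustrated with Python's `json.dumps`, which is where this flag comes from (a minimal sketch; the tool name and arguments here are made up for illustration, and the actual serialization path inside llama.cpp differs):

```python
import json

# A hypothetical tool call containing a non-ASCII character.
tool_call = {"name": "get_weather", "arguments": {"city": "café"}}

# Old behavior: non-ASCII characters are escaped to \uXXXX sequences,
# so the model sees escape codes instead of the real text.
escaped = json.dumps(tool_call, ensure_ascii=True)

# New behavior: UTF-8 characters pass through intact.
raw = json.dumps(tool_call, ensure_ascii=False)

print(escaped)  # {"name": "get_weather", "arguments": {"city": "caf\u00e9"}}
print(raw)      # {"name": "get_weather", "arguments": {"city": "café"}}
```

With `ensure_ascii=True`, an accented character becomes a six-character escape sequence in the serialized arguments, which is what caused the degraded tool calling for non-English languages.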

Also other changes:

  1. (Ongoing) We will add Ollama model files so Ollama works as well.
  2. Added lots of tool calls to our calibration dataset - this makes tool calling better, especially for smaller quants.
  3. Added a bit more calibration data for the GLM models, adding a tiny bit more accuracy overall.

GGUFs which will receive Quality of Life updates:
https://huggingface.co/unsloth/GLM-4.6-GGUF
https://huggingface.co/unsloth/GLM-4.5-GGUF
https://huggingface.co/unsloth/GLM-4.5-Air-GGUF
https://huggingface.co/unsloth/GLM-4.6V-GGUF
https://huggingface.co/unsloth/GLM-4.6V-Flash-GGUF
https://huggingface.co/unsloth/GLM-4.7-GGUF


Thank you for revisiting these models, much appreciated!

I was wondering if you can help explain (and maybe recommend) which 4-bit quant would be best for my use case: Mac Studio 512GB -> llama-server -> Roo Code.

I've got the Q8_K_XL already downloaded, but that model + the MXFP4 version of Qwen-Coder-Next maxes out my memory.

My ideal is to have an accurate (and performant) quant of GLM 4.7, 4.6V, and a few other models loaded all at the same time.

So with that preamble out of the way, I guess I have 3 questions (all for the M3 Ultra - and the size of the 4-bit quant is not a consideration):

  1. Which 4-bit quant is the best for accuracy?
  2. Which is the best for speed?
  3. Is there a quant that offers the best of both for my Apple Silicon use case?
    (Knowing that on your blog, you often say: "We use the UD-Q4_K_XL quant for the best size/accuracy balance")

IQ4_XS
Q4_K_S
IQ4_NL
Q4_0
Q4_1
Q4_K_M
Q4_K_XL
*Let's pretend there is also an MXFP4, so it saves me from asking the same question for another model 😀

@danielhanchen I do hope to get your answer - commercial LLMs (ChatGPT, Sonnet, etc.) have been useless at answering this question.

e.g. when asked the same question about quants and MiniMax M2.1, it says the XL ones are typos.
https://claude.ai/share/438fcc5d-ea95-48e3-9654-5a4ebfa3c3f3

@danielhanchen gentle nudge in case you missed my earlier messages 😀
