IQ5/IQ6 Quants?

#4
by binahz - opened

GLM-5 is, in my opinion, the strongest general-purpose local model at the moment, particularly for non-thinking tasks. From experience, your quant recipes seem to be the best in terms of accuracy. I know the GLM-5 implementation in llama.cpp is not complete yet (it is missing DSA/MTP), but it doesn't look like there are any plans to implement these soon.
As such, is there any chance you could create larger IQ quants for this model?

@binahz

Thanks, yes, it is a powerful model, despite being fairly slow due to the lack of some optimization features and its larger A40B active parameter count. It makes sense to disable thinking given the rather slow TG performance on most hardware.

I have a limited Hugging Face public quota, so I'm careful not to release too many big models. Hence I didn't even cover the ~4 bpw range, as AesSedai has been doing some larger quants for mainline llama.cpp with MoE-optimized recipes similar to my own: https://huggingface.co/AesSedai/GLM-5-GGUF . There is a high-4ish-bpw quant there that might suit your needs.

Do you have a specific RAM+VRAM cutoff the quants should not exceed? Just curious, though I probably won't get around to it given all the new Qwen models I'm still cleaning up.

Thanks for getting back to me, and yeah, the storage quota sucks. It's really unfortunate, and probably going to get worse once stuff like V4 drops. I'd honestly do this myself, but with the BF16 file size plus the quant size, that's already 2TB+ on its own! As for the cutoff, I was thinking 768GB RAM and 24GB VRAM. The reason is that at the next tier up, 1.5TB, one can already host all relevant SOTA models (and hopefully upcoming ones, fingers crossed) at Q8, and there wouldn't be much of a speedup for them with lower quants anyway.

> I'd honestly do this myself but with the BF16 file size plus the quant size, that's already 2TB+ on its own!

You could probably just grab the BF16 from here at ~1.52TB: https://huggingface.co/unsloth/GLM-5-GGUF/tree/main/BF16 (if 1.5TB safetensors + 1.5TB GGUF was the concern).

I know it may be somewhat late to ask this, but do you have recipes for some of the larger unreleased quants in your graph? Or perhaps a general heuristic for generating them from the other examples given in the model card?

I'm guessing this is somewhat close to the recipe for your unreleased IQ4_NL?

```bash
custom="
# 79 Repeating Layers [0-78]

## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq6_k
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq6_k
blk\..*\.ffn_down_exps\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_nl

# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k

# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq5_k
output\.weight=iq6_k
"
```

And of course IQ4_NL is passed in the actual call to llama-quantize. Am I close?
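One detail worth noting about recipes like this: pattern order matters, since (as I understand ik_llama.cpp's --custom-q behavior) each tensor takes the type from the first regex that matches its name, which is why the blk\.(78) overrides come before the generic blk\..* rules. A small Python sketch of that resolution logic (the rule subset and fallback are illustrative, not the actual implementation):

```python
import re

# Illustrative subset of the routed-experts rules above, in recipe order.
# Assumes first-match-wins semantics for --custom-q style rules.
RULES = [
    (r"blk\.(78)\.ffn_down_exps\.weight", "iq6_k"),
    (r"blk\..*\.ffn_down_exps\.weight", "iq5_ks"),
    (r"blk\..*\.ffn_(gate|up)_exps\.weight", "iq4_nl"),
]

def resolve(tensor_name, rules=RULES, default="iq4_nl"):
    """Return the quant type from the first rule matching the tensor name."""
    for pattern, qtype in rules:
        if re.fullmatch(pattern, tensor_name):
            return qtype
    return default  # unmatched tensors fall back to the base type

print(resolve("blk.78.ffn_down_exps.weight"))  # iq6_k (layer-78 override wins)
print(resolve("blk.10.ffn_down_exps.weight"))  # iq5_ks
print(resolve("blk.10.ffn_gate_exps.weight"))  # iq4_nl
```

If the two blk\..* lines came first, the blk.78 overrides would never fire, since blk\..* already matches layer 78.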

@mikeroz

Yeah, you can just swap out iq4_nl (with the exact case, as you've done) and that will be fine.

A few thoughts: I've been experimenting with 32-block quantization types on the Qwen3.5 models, e.g. iq4_nl and now also q6_0, given this recent experiment by ik himself: https://github.com/ikawrakow/ik_llama.cpp/issues/1471#issuecomment-4097526398

Also, a very nice quality-of-life feature now available is `llama-quantize --dry-run ...`, which iterates over everything, catches possible imatrix-related issues, and shows the final size, so you can fine-tune your recipe to hit the exact desired target size without as much waiting around.
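For instance, a dry run might look like this (a sketch only: paths and filenames here are placeholders, and argument order may differ from your build):

```bash
# Sketch: same shape as a real quantization run, but --dry-run only walks
# the tensors, reporting imatrix coverage problems and the projected file
# size, without writing the output GGUF.
./build/bin/llama-quantize \
    --dry-run \
    --custom-q "$custom" \
    --imatrix imatrix-GLM-5-BF16.dat \
    GLM-5-BF16-00001-of-00033.gguf \
    GLM-5-IQ4_NL.gguf \
    IQ4_NL
```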

Here is my exact recipe I used for the tested iq4_nl quant:

```bash
#!/usr/bin/env bash

custom="
# 79 Repeating Layers [0-78]

## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq6_k
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq6_k
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_nl

# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=q8_0

# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"

# Drop the comment lines and collapse the remaining rules into the
# single comma-separated list that --custom-q expects.
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# SOCKET selects the NUMA node to pin both CPU and memory to.
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ4_NL.gguf \
    IQ4_NL \
    128
```
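If you want to sanity-check the flattening step on its own, the grep/sed pipeline in the script can be mirrored in a few lines of Python (a sketch, not the author's tooling):

```python
# Mirror of `grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::'`:
# drop comment and blank lines, then join the rules with commas,
# which is the format --custom-q expects.
def flatten_recipe(recipe: str) -> str:
    lines = [ln for ln in recipe.splitlines()
             if ln.strip() and not ln.startswith("#")]
    return ",".join(lines)

recipe = """
# Non-Repeating Layers
token_embd\\.weight=iq6_k
output\\.weight=iq6_k
"""
print(flatten_recipe(recipe))  # token_embd\.weight=iq6_k,output\.weight=iq6_k
```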
