WIP
Make sure to pull and build ik_llama.cpp PR#1268 until it gets merged into main. I'm still fishing for more good quant recipes and will upload the good ones. Open a discussion if you have a target RAM+VRAM configuration in mind.
ik_llama.cpp imatrix Quantizations of zai-org/GLM-5
NOTE ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds. Also check for ik_llama.cpp Windows builds by Thireus here.
These quants provide best in class perplexity for the given memory footprint.
Big Thanks
Shout out to Wendell and the Level1Techs crew, the community Forums, and YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!
Quant Collection
Perplexity computed against wiki.test.raw. (lower is "better")
These two are just test quants for baseline perplexity comparison and not available for download here:
BF16 1404.406 GiB (16.003 BPW) - PPL over 565 chunks for n_ctx=512 = 2.6298 +/- 0.01396
Q8_0 746.302 GiB (8.504 BPW) - PPL over 565 chunks for n_ctx=512 = 2.6303 +/- 0.01398
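As a quick sanity check on the table entries, file size and bits-per-weight imply the total parameter count: params ≈ GiB × 2^30 × 8 / BPW. A small sketch using the BF16 row above (illustrative arithmetic only; GiB here means 2^30 bytes):

```shell
# Back-of-envelope: parameter count implied by size and bits-per-weight,
# using the BF16 numbers from the table: 1404.406 GiB at 16.003 BPW.
awk 'BEGIN {
  gib = 1404.406; bpw = 16.003
  params = gib * 2^30 * 8 / bpw
  printf "%.1fB params\n", params / 1e9    # -> 753.8B params
}'
```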
NOTE: The first split file is much smaller on purpose, as it only contains metadata; this is fine!
IQ3_KS 320.216 GiB (3.649 BPW)
PPL over 565 chunks for n_ctx=512 = 2.7839 +/- 0.01508
NOTE: Actual RAM/VRAM used will be about 314.07 GiB despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-IQ3_KS.gguf \
IQ3_KS \
128
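The grep/sed pipeline in the script above strips the comment lines and joins the remaining rules into the single comma-separated string that --custom-q expects. A minimal, self-contained sketch of that transformation with toy rules (same pipeline; -z is GNU sed's whole-input mode):

```shell
# Toy input mimicking the recipe format: comments plus pattern=quant rules.
custom="
# a comment that should be dropped
blk\..*\.attn_output\.weight=iq6_k
output\.weight=iq6_k
"
# Same pipeline as the recipe: drop comment lines, join the rest with commas,
# then trim any leading/trailing comma.
flattened=$(echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::')
echo "$flattened"
# -> blk\..*\.attn_output\.weight=iq6_k,output\.weight=iq6_k
```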
smol-IQ2_KS 205.738 GiB (2.344 BPW)
PPL over 565 chunks for n_ctx=512 = 3.7792 +/- 0.02183
NOTE: Actual RAM/VRAM used will be about 200 GiB despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ2_KS.gguf \
IQ2_KS \
128
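Note the ordering inside these recipes: the specific blk\.(78) rules come before the generic blk\..* catch-alls. My understanding (an assumption about --custom-q worth verifying against your llama-quantize build) is that the first matching pattern wins, so overrides must precede catch-alls. A toy first-match resolver illustrating the idea; resolve() is a hypothetical helper, not part of llama-quantize:

```shell
# Two overlapping rules from the recipe, in recipe order.
rules='blk\.(78)\.ffn_down_exps\.weight=iq5_ks,blk\..*\.ffn_down_exps\.weight=iq4_ks'

resolve() {
  tensor=$1
  echo "$rules" | tr ',' '\n' | while IFS='=' read -r pat q; do
    # The first pattern that fully matches the tensor name decides the quant.
    if echo "$tensor" | grep -Eq "^${pat}\$"; then
      echo "$q"
      break
    fi
  done
}

resolve blk.78.ffn_down_exps.weight   # specific rule wins -> iq5_ks
resolve blk.10.ffn_down_exps.weight   # falls through to catch-all -> iq4_ks
```

If the generic blk\..* lines came first, they would shadow the blk.78 overrides entirely.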
smol-IQ1_KT 169.190 GiB (1.928 BPW)
PPL over 565 chunks for n_ctx=512 = 4.6032 +/- 0.02768
NOTE: Actual RAM/VRAM used will be about 163.046 GiB despite the larger reported model size, due to unused blk.78/indexer/nextn tensors.
👈 Secret Recipe
#!/usr/bin/env bash
custom="
# 79 Repeating Layers [0-78]
## Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq1_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt
# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k
# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
/mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ1_KT.gguf \
IQ1_KT \
128
Quick Start
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
# Download Quants
$ pip install huggingface_hub
$ hf download --local-dir ./GLM-5-GGUF/ --include=smol-IQ1_KT/*.gguf ubergarm/GLM-5-GGUF
# Hybrid CPU and Single GPU
echo TODO or look at ubergarm/GLM-4.7-GGUF model card quick start for now
# Multi GPU Full Offload
echo TODO or look at ubergarm/GLM-4.7-GGUF model card quick start for now
# CPU-Only
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-5 \
-ger \
--merge-qkv \
--ctx-size 131072 \
-ctk q8_0 \
-mla 3 \
--parallel 1 \
--threads 96 \
--threads-batch 128 \
--numa numactl \
--host 127.0.0.1 \
--port 8080 \
--no-mmap \
--jinja
I tested that even the smol-IQ1_KT works with opencode! You can also bring your own template with --chat-template-file myTemplate.jinja.