GLM-4.7-heretic-fp8
An FP8 W8A8 quantized version of trohrbaugh/GLM-4.7-heretic, a decensored zai-org/GLM-4.7 produced with Heretic v1.2.0 plus custom modifications.
Heretic abliteration parameters:

| Parameter | Value |
|---|---|
| direction_index | per layer |
| attn.o_proj.max_weight | 1.84 |
| attn.o_proj.max_weight_position | 49.16 |
| attn.o_proj.min_weight | 1.64 |
| attn.o_proj.min_weight_distance | 26.42 |
| mlp.down_proj.max_weight | 1.02 |
| mlp.down_proj.max_weight_position | 53.46 |
| mlp.down_proj.min_weight | 0.97 |
| mlp.down_proj.min_weight_distance | 45.98 |
Heretic evaluation results:

| Metric | This model | Original model (zai-org/GLM-4.7) |
|---|---|---|
| KL divergence | 0.0748 | 0 (by definition) |
| Refusals | 0/100 | 99/100 |
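The KL divergence row measures how far the decensored model's next-token distribution drifts from the original's. A minimal sketch of the per-position computation, using hypothetical logits rather than the actual Heretic evaluation code:

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i); zero iff p == q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token logits from the original (p) and modified (q) model.
p = softmax([2.0, 1.0, 0.0])
q = softmax([2.0, 1.2, -0.1])

kl = kl_divergence(p, q)
assert kl >= 0.0                   # KL divergence is non-negative
assert kl_divergence(p, p) == 0.0  # and exactly zero against itself
```

A low averaged value such as 0.0748 indicates the abliteration left the model's output distribution close to the original.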
Quantized with llm-compressor (v0.10.1-dev, main branch) to produce a compressed-tensors checkpoint that vLLM supports natively; no `--quantization` flag or patches are needed.
Format: compressed-tensors (auto-detected by vLLM)

| Component | Precision | Rationale |
|---|---|---|
| Routed expert weights (160 experts × 89 MoE layers) | FP8 E4M3 | Bulk of model — per-channel static scaling via calibration |
| Attention projections (q/k/v/o) | FP8 E4M3 | GQA with 96Q / 8KV heads, head_dim=128 |
| Shared expert weights | FP8 E4M3 | Active every token, well-calibrated |
| Dense MLP (layers 0–2) | FP8 E4M3 | Only 3 dense layers |
| Attention biases (q/k/v) | BF16 | Small tensors, sensitive to precision loss |
| Router/gate weights | BF16 | Routing errors cascade through all downstream computation |
| MoE e_score_correction_bias | BF16 | Critical for expert load balancing |
| RMSNorm / QK norms | BF16 | Negligible size, high sensitivity |
| Embeddings / LM head | BF16 | Standard practice for quantized models |
| MTP head (layer 92: enorm, hnorm, eh_proj) | BF16 | Speculative decoding head, kept full precision |
Tensors matching these patterns are excluded from quantization and kept in BF16:

```python
IGNORE_PATTERNS = [
    "re:.*embed_tokens.*",
    "lm_head",
    "re:.*layernorm.*",
    "re:.*q_norm.*",
    "re:.*k_norm.*",
    "model.norm",
    "re:.*self_attn\\.q_proj\\.bias",
    "re:.*self_attn\\.k_proj\\.bias",
    "re:.*self_attn\\.v_proj\\.bias",
    "re:.*mlp\\.gate$",
    "re:.*mlp\\.gate\\.weight",
    "re:.*mlp\\.gate\\.e_score_correction_bias",
    "re:.*\\.enorm",
    "re:.*\\.hnorm",
    "re:.*\\.eh_proj",
    "re:.*shared_head\\.norm",
]
```
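In these patterns, a `re:` prefix marks a regular expression matched against the tensor name; entries without the prefix are exact names. A simplified approximation of that matching logic (assuming `re.match`-style anchoring at the start of the name), handy for checking which tensors a pattern set excludes:

```python
import re

# Abbreviated subset of the ignore list above, for illustration.
IGNORE_PATTERNS = [
    "re:.*embed_tokens.*",
    "lm_head",
    "re:.*self_attn\\.q_proj\\.bias",
    "re:.*mlp\\.gate$",
]

def is_ignored(name, patterns):
    """Simplified check: 're:'-prefixed entries are regexes, others exact names."""
    for pat in patterns:
        if pat.startswith("re:"):
            if re.match(pat[3:], name):
                return True
        elif pat == name:
            return True
    return False

assert is_ignored("model.layers.5.self_attn.q_proj.bias", IGNORE_PATTERNS)
assert not is_ignored("model.layers.5.self_attn.q_proj.weight", IGNORE_PATTERNS)
assert is_ignored("model.layers.3.mlp.gate", IGNORE_PATTERNS)   # "$" excludes gate_up_proj etc.
assert is_ignored("lm_head", IGNORE_PATTERNS)                   # exact-name match
```

The trailing `$` on `mlp\.gate$` matters: it prevents the router pattern from accidentally catching larger tensors such as `mlp.gate_up_proj`.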
vLLM auto-detects the compressed-tensors FP8 format from `config.json`; no `--quantization` flag is required.
```shell
vllm serve trohrbaugh/GLM-4.7-heretic-fp8 \
  --tensor-parallel-size 4 \
  --max-model-len 131072 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice
```
To disable thinking mode (shorter, faster responses):
```shell
vllm serve trohrbaugh/GLM-4.7-heretic-fp8 \
  --tensor-parallel-size 4 \
  --max-model-len 131072 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --enable-auto-tool-choice \
  --default-chat-template-kwargs '{"enable_thinking": false}'
```
Or disable per-request:
```json
{
  "model": "trohrbaugh/GLM-4.7-heretic-fp8",
  "messages": [{"role": "user", "content": "Hello"}],
  "chat_template_kwargs": {"enable_thinking": false}
}
```
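The same per-request override can be built from a Python client. A minimal sketch that only constructs the payload (the endpoint URL is an assumption for a locally running vLLM server; nothing is sent here):

```python
import json

def build_chat_request(user_message, enable_thinking=False):
    """Build an OpenAI-compatible chat completions payload with the
    per-request vLLM chat_template_kwargs override."""
    return {
        "model": "trohrbaugh/GLM-4.7-heretic-fp8",
        "messages": [{"role": "user", "content": user_message}],
        "chat_template_kwargs": {"enable_thinking": enable_thinking},
    }

payload = build_chat_request("Hello")
# POST this body to http://localhost:8000/v1/chat/completions (assumed local server).
body = json.dumps(payload)
assert payload["chat_template_kwargs"]["enable_thinking"] is False
```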
| Configuration | Approx. VRAM | Example hardware |
|---|---|---|
| TP=4 | ~370 GB | 4× H100 80GB, 4× RTX PRO 6000 96GB |
| TP=8 | ~370 GB | 8× A100 80GB, 8× RTX PRO 6000 96GB |
| Variant | Size | Format | Link |
|---|---|---|---|
| BF16 (full precision) | ~706 GB | safetensors | trohrbaugh/GLM-4.7-heretic |
| FP8 W8A8 (this model) | ~362 GB | compressed-tensors | trohrbaugh/GLM-4.7-heretic-fp8 |
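The size difference follows from simple arithmetic: BF16 stores 2 bytes per parameter and FP8 stores 1, so halving the ~706 GB checkpoint would give roughly 353 GB, and the extra ~9 GB in the FP8 variant corresponds to the tensors kept in BF16 per the scheme above. A rough back-of-the-envelope check (all figures approximate, derived from the table):

```python
bf16_gb = 706                    # full-precision checkpoint size (from table)
params_billion = bf16_gb / 2     # ~353B parameters at 2 bytes each
all_fp8_gb = params_billion * 1  # ~353 GB if every tensor were FP8
reported_fp8_gb = 362            # actual size of this checkpoint (from table)

# Residual overhead from tensors kept in BF16 (norms, biases, router,
# embeddings, LM head, MTP head): roughly 9 GB.
bf16_kept_extra_gb = reported_fp8_gb - all_fp8_gb
assert bf16_kept_extra_gb > 0
```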
```bibtex
@misc{5team2025glm45agenticreasoningcoding,
  title={GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models},
  author={GLM Team and Aohan Zeng and Xin Lv and others},
  year={2025},
  eprint={2508.06471},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.06471},
}
```
Base model: zai-org/GLM-4.7