Mixed Precision GGUF layer quantization of GLM-4.6V-Flash by zai-org

Original model: https://huggingface.co/zai-org/GLM-4.6V-Flash

The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. This quant is sized at ~IQ4_XS bpw. All layer quants are K quants, avoiding the slow processing of IQ quants on CPUs and older GPUs. For this file the custom Q4_P_H layer quants are defined as follows:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K

   LAYER_TYPES='[
   [0 ,"Q4_K_M"], [1 ,"Q4_K_S"], [2 ,"Q3_K_L"], [3 ,"Q3_K_M"], [4 ,"Q4_K_L"], [5 ,"Q3_K_M"], [6 ,"Q3_K_L"], [7 ,"Q3_K_M"],
   [8 ,"Q3_K_L"], [9 ,"Q3_K_L"], [10,"Q3_K_L"], [11,"Q3_K_L"], [12,"Q3_K_L"], [13,"Q3_K_L"], [14,"Q3_K_L"], [15,"Q3_K_L"],
   [16,"Q4_K_S"], [17,"Q3_K_L"], [18,"Q4_K_S"], [19,"Q3_K_L"], [20,"Q4_K_S"], [21,"Q3_K_L"], [22,"Q4_K_S"], [23,"Q3_K_L"],
   [24,"Q4_K_S"], [25,"Q4_K_S"], [26,"Q4_K_S"], [27,"Q4_K_S"], [28,"Q4_K_M"], [29,"Q4_K_S"], [30,"Q4_K_M"], [31,"Q4_K_S"],
   [32,"Q4_K_M"], [33,"Q4_K_L"], [34,"Q4_K_M"], [35,"Q4_K_L"], [36,"Q5_K_S"], [37,"Q5_K_M"], [38,"Q5_K_L"], [39,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high --tensor-pad [[13696,13824],[27392,27648,2]] --override-kv glm4.feed_forward_length=int:13824"

The model's FFN length is padded from 13696 to 13824 (the next multiple of 256, the K-quant super-block size) to allow use of K quants in those layers instead of falling back to legacy quants.
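The padding target follows from the K-quant super-block width. A minimal sketch of the arithmetic (QK_K = 256 is llama.cpp's K-quant super-block size; the helper name is illustrative, not from llama.cpp):

```python
# K quants pack weights in super-blocks of 256 elements (QK_K in llama.cpp),
# so a tensor row length must be a multiple of 256 to use them.
QK_K = 256

def pad_to_superblock(n: int, block: int = QK_K) -> int:
    """Round n up to the next multiple of the K-quant super-block size."""
    return ((n + block - 1) // block) * block

ffn_len = 13696
print(ffn_len % QK_K)              # 128 -> not K-quantizable as-is
print(pad_to_superblock(ffn_len))  # 13824, matching --tensor-pad [13696,13824]
# The second --tensor-pad pair (27392 -> 27648) is twice these values,
# presumably covering a fused tensor of width 2x the FFN length.
```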

Comparison:

| Quant | Size | PPL | Comment |
|-------|------|-----|---------|
| IQ4_XS | 5.3e9 | 11.8 | - |
| Q4_P_H | 5.6e9 | 11.8 | Hybrid quant with Q4_K embedding, Q6_K output |

The quant was sized to run on 8 GB VRAM GPUs alongside the mmproj, and was then evaluated for acceptable reasoning performance across a curated set of test prompts.
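As a rough cross-check of the comparison above, file size can be converted to bits per weight. A back-of-envelope sketch, assuming the model's ~9B parameter count:

```python
# Rough bits-per-weight (bpw) from file size and parameter count.
params = 9e9      # approximate parameter count
iq4_xs = 5.3e9    # bytes
q4_p_h = 5.6e9    # bytes

def bpw(size_bytes: float, n_params: float) -> float:
    return size_bytes * 8 / n_params

print(f"IQ4_XS: {bpw(iq4_xs, params):.2f} bpw")        # ~4.71
print(f"Q4_P_H: {bpw(q4_p_h, params):.2f} bpw")        # ~4.98
print(f"overhead: {(q4_p_h - iq4_xs) / 1e9:.1f}e9 B")  # 0.3e9 B
```

The numbers come out higher than the nominal 4.25 bpw of IQ4_XS because the file also holds embeddings, the output tensor, and metadata.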

Usage:

This is an RL-trained (thinking) vision model. The layer quants for this model were evaluated on a set of test/eval prompts using greedy sampling. The model appears to be very robust against infinite generations on the eval prompts with greedy sampling, always converging to an answer. Image mode was tested on a small set of images and found to be both functional and accurate.

This model will respond in Chinese when no system prompt is given. For English responses, the following system prompt can be used:

SYSTEM="language = english"

In tests this system prompt causes the model to both reason and respond in English.
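As a sketch of how this system prompt might be supplied through llama.cpp's OpenAI-compatible server (llama-server exposes /v1/chat/completions; the user message and host are placeholders):

```python
import json

# Chat request body for llama-server's OpenAI-compatible endpoint
# (POST http://localhost:8080/v1/chat/completions by default).
payload = {
    "messages": [
        # Forces English reasoning and answers per the note above.
        {"role": "system", "content": "language = english"},
        {"role": "user", "content": "Describe this image."},
    ],
    "temperature": 0.0,  # greedy sampling, as used in the evals above
}
body = json.dumps(payload)
print(body)
```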

The model can be speculated with Qwen3 0.6B as the draft model. Approximate performance using a downstream speculator with llama.cpp on one 4070 (12G VRAM) GPU with fixed speculative block length ND:

| ND | QKV | NKV | gen tps | Comment |
|----|------|------|---------|-----------------|
| 3 | F16 | 33k | 76 | llama.cpp b7845 |
| 0 | F16 | 128k | 67 | "" |
| 3 | Q8_0 | 38k | 81 | "" |
| 0 | Q8_0 | 128k | 68 | "" |

Speculation is of marginal benefit due to the difficulty of speculating RL models, which generate reflections at unpredictable times.
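The marginal benefit can be quantified directly from the ND=3 vs ND=0 rows of the table above:

```python
# Speculative-decoding speedup, from the generation tps numbers above.
f16_speedup = 76 / 67 - 1   # ~13% with an F16 KV cache
q8_speedup  = 81 / 68 - 1   # ~19% with a Q8_0 KV cache
print(f"F16 KV: +{f16_speedup:.0%}, Q8_0 KV: +{q8_speedup:.0%}")
```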

Benchmarks:

Vision benchmarks for the model are given here: https://huggingface.co/spaces/steampunque/benchlm

Download the files below:

| Link | Type | Size | Notes |
|------|------|------|-------|
| GLM-4.6V-Flash.Q4_P_H.gguf | Q4_P_H | 5.6e9 B | ~0.3e9 B larger than IQ4_XS |
| GLM-4.6V-Flash.mmproj.gguf | F16 | 1.8e9 B | multimedia projector |

A discussion thread about the hybrid layer quant approach can be found in the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040

Model size: 9B params
Architecture: glm4