GLM-5.1-FP8_dq4

This model is a DQ4-quantized version of the original model [GLM-5.1-FP8](Local Model). It was quantized locally using the mlx_lm library.

Quantization Methodology (DQ4)

This model was quantized using the dynamic DQ4 (4-bit / 5-bit / 6-bit / 8-bit mixed) approach, inspired by the methodology described in the mlx-community/Kimi-K2.5-mlx-DQ3_K_M-q8 repository.

Bit widths are mixed per MLX layer (see the predicate sketch after this list):

  • Expert layers (switch_mlp / mlp) are quantized to 4-bit.
  • The first 5 layers are kept at higher quality (6-bit).
  • Every 5th layer is kept at medium quality (5-bit).
  • All other layers (e.g. attention, normalization) remain at 8-bit to serve as the "8-bit brain".
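As a rough illustration, the recipe above can be expressed as a quantization predicate passed to mlx_lm's convert(). This is a minimal sketch, not the exact script used for this model: the module-path conventions ("model.layers.<i>...", "switch_mlp"/"mlp"), the predicate signature, and the source path are assumptions based on recent mlx_lm releases.

```python
from mlx_lm import convert  # exported at the top level in recent mlx_lm releases


def dq4_predicate(path, module, config):
    """Pick a bit width per quantizable module, mirroring the recipe above.

    `path` is the dotted module path (assumed to look like
    "model.layers.17.mlp.switch_mlp...."); the block index is taken to be
    the first integer component of that path.
    """
    layer_idx = next((int(p) for p in path.split(".") if p.isdigit()), None)
    # Non-expert weights (attention, embeddings, ...): the "8-bit brain".
    # Note: "mlp" matches "switch_mlp" as a substring, so both are covered.
    if "mlp" not in path or layer_idx is None:
        return {"bits": 8, "group_size": 64}
    if layer_idx < 5:            # first 5 layers: higher quality
        return {"bits": 6, "group_size": 64}
    if layer_idx % 5 == 0:       # every 5th layer: medium quality
        return {"bits": 5, "group_size": 64}
    return {"bits": 4, "group_size": 64}   # remaining expert layers: 4-bit


convert(
    "path/to/GLM-5.1-FP8",       # hypothetical local path to the FP8 source model
    mlx_path="GLM-5.1-FP8_dq4",
    quantize=True,
    quant_predicate=dq4_predicate,
)
```

Modules that MLX does not quantize at all (e.g. normalization layers) never reach the predicate and stay in their original precision.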