# GLM-4.7-Flash — 19GB (MLX)

A mixed-precision quantized version of zai-org/GLM-4.7-Flash, produced by baa.ai with a proprietary Black Sheep AI quantization method.

Bit widths are allocated per tensor via sensitivity analysis and budget-constrained optimisation; no calibration data is required.
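
The actual allocator is proprietary and not described here, but the general idea of budget-constrained per-tensor bit allocation can be sketched as a greedy knapsack: start every tensor at the lowest width, then repeatedly upgrade the tensor whose sensitivity score promises the biggest payoff, until the average-bit budget (5.1 bits in this build) is spent. Everything below is illustrative only: `allocate_bits`, the sensitivity scores, and the candidate widths are assumptions, not the Black Sheep AI method.

```python
# Illustrative sketch only; the real Black Sheep AI allocator is proprietary.
import heapq

def allocate_bits(tensors, avg_bits_budget, widths=(3, 4, 5, 6, 8)):
    """tensors: list of (name, n_params, sensitivity) tuples.
    sensitivity is an assumed per-tensor error score (higher = keep precise).
    Returns {name: bits} keeping the parameter-weighted average within budget."""
    total_params = sum(n for _, n, _ in tensors)
    budget = avg_bits_budget * total_params           # total bit budget
    alloc = {name: widths[0] for name, _, _ in tensors}
    spent = widths[0] * total_params

    # Max-heap of candidate upgrades, most sensitive tensors first.
    heap = [(-s, name, n, 0) for name, n, s in tensors]  # 0 = index into widths
    heapq.heapify(heap)
    while heap:
        neg_gain, name, n, i = heapq.heappop(heap)
        if i + 1 >= len(widths):
            continue                                  # already at max width
        extra = (widths[i + 1] - widths[i]) * n       # bits this upgrade costs
        if spent + extra > budget:
            continue                                  # can't afford this one
        alloc[name] = widths[i + 1]
        spent += extra
        # Assume diminishing returns: halve the gain for the next step up.
        heapq.heappush(heap, (neg_gain / 2, name, n, i + 1))
    return alloc
```

In a calibration-free setting like the one the card claims, the sensitivity scores would have to come from weight statistics (e.g. a Hessian proxy or per-channel dynamic range) rather than from activations on sample data.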

## Metrics

| Metric | Value |
|---|---|
| Size | 19 GB |
| Average bits | 5.1 |
| WikiText-2 PPL (median) | 8.7520 |
| PPL vs BF16 | +10.5% |
| MMLU retention vs BF16 | 107.3% |
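
The two relative rows are plain ratios against the BF16 baseline (the card does not list the baseline numbers themselves). A quick sanity check, with the BF16 values treated as assumptions derived from the ratios:

```python
# Baseline values are hypothetical; the card reports only the ratios.
ppl_quant = 8.7520
ppl_bf16 = ppl_quant / 1.105                    # implied BF16 PPL, ~7.92
ppl_delta = (ppl_quant / ppl_bf16 - 1) * 100    # -> +10.5%

mmlu_bf16 = 0.60                                # assumed baseline accuracy
mmlu_quant = mmlu_bf16 * 1.073                  # 107.3% of BF16
print(f"PPL delta: +{ppl_delta:.1f}%  MMLU retention: {mmlu_quant / mmlu_bf16:.1%}")
```

Note that MMLU retention above 100% means the quantized model scored slightly higher than the BF16 baseline on that benchmark, which is within normal run-to-run noise.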

## Usage

```python
from mlx_lm import load, generate

# Fetch the weights and tokenizer from the Hub (or load a cached local copy).
model, tokenizer = load("baa-ai/GLM-4.7-Flash-RAM-20GB-MLX")

# max_tokens caps the length of the generated response.
response = generate(model, tokenizer, prompt="Hello!", max_tokens=256)
print(response)
```
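
For conversational use it is usually better to format the prompt with the tokenizer's chat template rather than passing raw text; a sketch, assuming the bundled tokenizer ships the model's chat template:

```python
# Wrap the user turn in the chat template before generating.
messages = [{"role": "user", "content": "Explain mixed-precision quantization briefly."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```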

Quantized by baa.ai
