GLM-4.7-Flash-PRISM (MLX 4-bit)

MLX 4-bit quantized version of Ex0bit/GLM-4.7-Flash-PRISM for efficient local inference on Apple Silicon.

  • Quantization: 4-bit affine, group size 64 (4.5 bits per weight once per-group scales and biases are counted; see the conversion sketch below)
  • Architecture: GLM-4 MoE Lite (47 layers, 64 routed experts, 4 active per token, ~30B total parameters)
  • Context: 202K tokens
  • Size: ~16 GB on disk
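
For reference, a quantization with these settings can be reproduced with mlx-lm's converter. A minimal sketch, assuming a recent mlx-lm release whose convert() accepts these options:

from mlx_lm import convert

# Quantize the original weights to 4-bit affine with group size 64,
# matching the settings listed above; output is written to ./mlx_model
convert(
    "Ex0bit/GLM-4.7-Flash-PRISM",
    quantize=True,
    q_bits=4,
    q_group_size=64,
)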

Usage

from mlx_lm import load, generate

# Load the 4-bit model and tokenizer from the Hugging Face Hub
model, tokenizer = load("shieldstackllc/GLM-4.7-Flash-PRISM-mlx-4bit")

# Generate a completion; verbose=True prints tokens as they are produced
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
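
For chat-style prompts, apply the tokenizer's chat template before generating. A minimal sketch, assuming the tokenizer returned by load() wraps a standard Hugging Face tokenizer with a chat template (the message content is illustrative):

messages = [{"role": "user", "content": "Explain Mixture-of-Experts briefly."}]

# Render the conversation into the prompt format the model expects
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)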

Alternatively, run the model with vMLX for native macOS inference.

About

This model is an abliterated (uncensored) variant of GLM-4.7-Flash, a Mixture-of-Experts language model by Zhipu AI / THUDM. The abliteration was done by Ex0bit as part of the PRISM series. MLX quantization by vMLX.

Made for vMLX

This model was converted and optimized for vMLX, a free, open-source, macOS-native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.

Contact

For questions, issues, or collaboration: admin@vmlx.net
