GLM-4.7-Flash APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of GLM-4.7-Flash.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks coming soon. For reference, see the APEX benchmarks on the Qwen3.5-35B-A3B architecture at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

Available Files

| File | Profile | Size | Best For |
|---|---|---|---|
| GLM-4.7-Flash-APEX-I-Balanced.gguf | I-Balanced | 21 GB | Best overall quality/size ratio |
| GLM-4.7-Flash-APEX-I-Quality.gguf | I-Quality | 18 GB | Highest quality, with imatrix |
| GLM-4.7-Flash-APEX-Quality.gguf | Quality | 18 GB | Highest quality, without imatrix |
| GLM-4.7-Flash-APEX-Balanced.gguf | Balanced | 21 GB | General purpose |
| GLM-4.7-Flash-APEX-I-Compact.gguf | I-Compact | 14 GB | Consumer GPUs, best quality/size |
| GLM-4.7-Flash-APEX-Compact.gguf | Compact | 14 GB | Consumer GPUs |
| GLM-4.7-Flash-APEX-I-Mini.gguf | I-Mini | 12 GB | Smallest viable, fastest inference |

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers keep higher precision, while middle layers are compressed more aggressively. The I-variants additionally use a diverse imatrix calibration set (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
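The actual assignment logic lives in the APEX project scripts; as a rough illustration only, here is a minimal Python sketch of a role-and-depth-based quant assignment. All names and quant-type choices below are hypothetical and do not reproduce the real APEX configuration.

```python
# Hypothetical sketch of APEX-style precision assignment (illustrative only).
# Quant type names are llama.cpp-style labels; this mapping is invented here.

N_LAYERS = 47      # GLM-4.7-Flash depth (1 dense + 46 MoE layers)
EDGE = 5           # layers at each end of the stack kept at higher precision

def pick_quant(layer: int, role: str) -> str:
    """Choose a quant type from a tensor's layer index and role."""
    is_edge = layer < EDGE or layer >= N_LAYERS - EDGE
    if role == "attention":       # attention tensors: keep precision high
        return "Q6_K"
    if role == "shared_expert":   # active on every token, so protect it
        return "Q6_K" if is_edge else "Q5_K"
    if role == "routed_expert":   # bulk of the weights, compressed hardest
        return "Q5_K" if is_edge else "Q3_K"
    return "Q8_0"                 # embeddings, norms, output head

for layer in (0, 23, 46):
    print(layer, pick_quant(layer, "routed_expert"))  # Q5_K, Q3_K, Q5_K
```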

See the APEX project for full details, technical report, and scripts.

Architecture

  • Model: GLM-4.7-Flash (Glm4MoeLite)
  • Layers: 47 (1 dense + 46 MoE)
  • Experts: 64 routed + 1 shared (4 active per token; see the sketch after this list)
  • Total Parameters: ~30B
  • Attention: Multi-head Latent Attention (MLA, DeepSeek-V2 style)
  • APEX Config: 5+5 symmetric edge gradient across 47 layers, MLA-aware tensor mapping
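The expert counts above are also why routed experts tolerate the most aggressive compression: only a small fraction of expert weights is active on any given token. A quick back-of-the-envelope check, assuming (hypothetically) that all experts are the same size:

```python
# Fraction of expert weights touched per token, assuming equal-sized experts.
routed, shared, active_routed = 64, 1, 4
active = (active_routed + shared) / (routed + shared)
print(f"~{active:.1%} of expert weights active per token")  # ~7.7%
```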

Run with LocalAI

```bash
local-ai run mudler/GLM-4.7-Flash-APEX-GGUF@GLM-4.7-Flash-APEX-I-Balanced.gguf
```
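Once the model is loaded, LocalAI serves an OpenAI-compatible API (by default on port 8080), so any OpenAI client can talk to it. A minimal sketch using the official Python client; the model name here is an assumption, so adjust it to whatever name LocalAI registers for the file:

```python
# Minimal chat request against LocalAI's OpenAI-compatible endpoint.
# Assumes LocalAI is running locally on its default port (8080); the model
# name below is a guess and should match the name LocalAI registered.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="GLM-4.7-Flash-APEX-I-Balanced.gguf",
    messages=[{"role": "user", "content": "In one sentence, what is an MoE model?"}],
)
print(resp.choices[0].message.content)
```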

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
