MiniMax-M2.5 APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of MiniMax-M2.5.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks are coming soon. For reference, see the APEX benchmarks on the Qwen3.5-35B-A3B architecture at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

Available Files

File                               Profile     Size    Best For
MiniMax-M2.5-APEX-I-Balanced.gguf  I-Balanced  155 GB  Best overall quality/size ratio
MiniMax-M2.5-APEX-I-Quality.gguf   I-Quality   130 GB  Highest quality with imatrix
MiniMax-M2.5-APEX-Quality.gguf     Quality     130 GB  Highest quality without imatrix
MiniMax-M2.5-APEX-Balanced.gguf    Balanced    155 GB  General purpose
MiniMax-M2.5-APEX-I-Compact.gguf   I-Compact   100 GB  Multi-GPU setups, best quality/size
MiniMax-M2.5-APEX-Compact.gguf     Compact     100 GB  Multi-GPU setups
MiniMax-M2.5-APEX-I-Mini.gguf      I-Mini      81 GB   Smallest viable option

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient -- edge layers get higher precision, middle layers get more aggressive compression. I-variants use diverse imatrix calibration (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
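
As a rough illustration of the role classification, here is a minimal Python sketch. The name substrings ("shexp", "exps") follow common llama.cpp GGUF tensor naming, but both they and the return labels are illustrative assumptions, not the project's actual mapping; the real logic lives in the APEX scripts.

    def classify_tensor(name: str) -> str:
        # Classify a GGUF tensor by its role in an MoE block
        # (substrings are assumptions based on common llama.cpp naming).
        if "shexp" in name:   # shared-expert FFN weights
            return "shared_expert"
        if "exps" in name:    # routed-expert FFN weights
            return "routed_expert"
        return "attention_or_other"

For example, classify_tensor("blk.10.ffn_down_exps.weight") returns "routed_expert".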

See the APEX project for full details, technical report, and scripts.

Architecture

  • Model: MiniMax-M2.5 (MiniMaxM2)
  • Layers: 62
  • Experts: 256 routed + 1 shared (8 active per token)
  • Total Parameters: 228.7B
  • Active Parameters: ~45B per token
  • APEX Config: 5+5 symmetric edge gradient across 62 layers (see the sketch after this list)
  • Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
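
To make the "5+5 symmetric edge gradient" concrete, the sketch below assumes it means the first 5 and last 5 of the 62 layers are the higher-precision edge layers; the quantization types shown are placeholders, not APEX's actual choices.

    # Assumed reading of "5+5 symmetric edge gradient across 62 layers":
    # the first and last 5 layers are edge layers.
    N_LAYERS, EDGE = 62, 5

    def is_edge(layer: int) -> bool:
        return layer < EDGE or layer >= N_LAYERS - EDGE

    print([l for l in range(N_LAYERS) if is_edge(l)])
    # -> [0, 1, 2, 3, 4, 57, 58, 59, 60, 61]

    # Placeholder per-layer precision plan (types are examples only):
    plan = {l: ("Q5_K" if is_edge(l) else "Q3_K") for l in range(N_LAYERS)}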

Run with LocalAI

local-ai run mudler/MiniMax-M2.5-APEX-GGUF@MiniMax-M2.5-APEX-I-Balanced.gguf
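
If you want the raw file for another llama.cpp-based runtime instead, one option is the huggingface_hub Python API; a usage sketch (requires pip install huggingface_hub):

    from huggingface_hub import hf_hub_download

    # Fetches the chosen APEX variant into the local HF cache
    # and returns the path to the .gguf file.
    path = hf_hub_download(
        repo_id="mudler/MiniMax-M2.5-APEX-GGUF",
        filename="MiniMax-M2.5-APEX-I-Balanced.gguf",
    )
    print(path)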

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
