
MiniMax-M2.5 REAP-19 — MLX 4-bit

MLX 4-bit quantized version of Akicou/MiniMax-M2-5-REAP-19 for efficient local inference on Apple Silicon.

  • Quantization: 4-bit (effective 5.0 bits per weight, group size 64, affine mode)
  • Architecture: MiniMax M2.5 MoE — 62 layers, 205 experts (REAP-pruned from 256), 8 active per token
  • Context: 196K tokens
  • Size: ~107 GB
  • Pruning: 19% of experts removed via REAP (Router-weighted Expert Activation Pruning)

Usage

from mlx_lm import load, generate

# Download (if needed) and load the 4-bit model and tokenizer from the Hub
model, tokenizer = load("shieldstackllc/MiniMax-M2-5-REAP-19-mlx-4bit")

# Generate a completion; verbose=True prints the output and throughput stats
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
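
For chat-style prompts, the tokenizer's chat template can be applied before generation. A minimal sketch, assuming a recent mlx-lm release; the message content and max_tokens value are illustrative:

from mlx_lm import load, generate

model, tokenizer = load("shieldstackllc/MiniMax-M2-5-REAP-19-mlx-4bit")

# Wrap the user message in the model's chat template before generating
messages = [{"role": "user", "content": "Explain expert pruning in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)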

Or run it with vMLX for native macOS inference.

About

MiniMax-M2.5 is a large Mixture-of-Experts language model by MiniMax AI. In this variant, Akicou removed 19% of the experts using REAP (Router-weighted Expert Activation Pruning), reducing model size and memory footprint while maintaining strong performance. MLX quantization by vMLX.
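
The 4-bit conversion itself can be reproduced with the mlx_lm converter. A rough sketch, assuming the settings listed above (4-bit, group size 64); the output path is illustrative, and both checkpoints need to fit on disk:

from mlx_lm import convert

# Quantize the REAP-pruned base model to 4-bit MLX weights
convert(
    "Akicou/MiniMax-M2-5-REAP-19",
    mlx_path="MiniMax-M2-5-REAP-19-mlx-4bit",  # illustrative output directory
    quantize=True,
    q_bits=4,
    q_group_size=64,
)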

Made for vMLX

This model was converted and optimized for vMLX — a free, open-source, native MLX inference engine for macOS on Apple Silicon. Download vMLX to run this model locally with zero configuration.

Credits

  • Base model: MiniMax AI
  • REAP expert pruning: Akicou (MiniMax-M2-5-REAP-19)
  • MLX 4-bit quantization: vMLX

Contact

For questions, issues, or collaboration: admin@vmlx.net
