# MiniMax-M2.5 REAP-19 — MLX 4-bit
MLX 4-bit quantized version of Akicou/MiniMax-M2-5-REAP-19 for efficient local inference on Apple Silicon.
- Quantization: 4-bit (5.0 bits per weight overall, group size 64, affine mode; see the sketch after this list)
- Architecture: MiniMax M2.5 MoE — 62 layers, 205 experts (REAP-pruned from 256), 8 active per token
- Context: 196K tokens
- Size: ~107 GB
- Pruning: 19% of experts removed via REAP (Router-weighted Expert Activation Pruning)
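As a rough check on the figures above, the effective bits per weight of grouped affine quantization follow from the group size: each group of weights stores one scale and one bias alongside the quantized values. A minimal sketch, assuming fp16 scales and biases; the reported 5.0 bpw likely also reflects tensors kept at higher precision, which this simple formula does not capture:

```python
def effective_bpw(bits: int = 4, group_size: int = 64,
                  scale_bits: int = 16, bias_bits: int = 16) -> float:
    """Bits per weight for grouped affine quantization: each group of
    `group_size` weights carries one scale and one bias (assumed fp16)."""
    return bits + (scale_bits + bias_bits) / group_size

print(effective_bpw())        # 4.5 for pure 4-bit, group size 64
print(effective_bpw(bits=8))  # 8.5 for the 8-bit variants listed below
```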
## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("shieldstackllc/MiniMax-M2-5-REAP-19-mlx-4bit")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
```
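For chat-style prompts, apply the tokenizer's chat template first, a standard mlx_lm pattern (the message content here is only a placeholder):

```python
messages = [{"role": "user", "content": "Write a haiku about Apple Silicon."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```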
Alternatively, run the model with vMLX for native macOS inference.
## About
MiniMax-M2.5 is a large Mixture-of-Experts language model by MiniMax AI. This variant was pruned by Akicou using REAP (Router-weighted Expert Activation Pruning), which removed 19% of the experts to reduce model size and memory footprint while maintaining strong performance. MLX quantization by vMLX.
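For intuition, REAP scores each expert by its actual contribution to the layer output: the router gate weight times the norm of the expert's output, averaged over the tokens routed to that expert, and the lowest-scoring experts are removed. A toy sketch of that saliency criterion (illustrative only; the function name, shapes, and data are assumptions, not the pruning code used for this model):

```python
import numpy as np

def reap_saliency(gate_weights: np.ndarray,
                  expert_out_norms: np.ndarray) -> np.ndarray:
    """Toy REAP-style expert saliency.

    gate_weights:     (tokens, experts) router weights, zero where an expert
                      was not selected for that token.
    expert_out_norms: (tokens, experts) L2 norm of each expert's output.

    Returns one score per expert: the mean, over tokens routed to the
    expert, of gate weight * output norm. Low-scoring experts are pruned.
    """
    contribution = gate_weights * expert_out_norms       # (tokens, experts)
    used = gate_weights > 0                              # tokens routed here
    counts = np.maximum(used.sum(axis=0), 1)             # avoid divide-by-zero
    return contribution.sum(axis=0) / counts

# Example: 6 experts; prune the one with the lowest score.
rng = np.random.default_rng(0)
gates = rng.random((1000, 6)) * (rng.random((1000, 6)) < 0.25)  # sparse routing
norms = rng.random((1000, 6)) + 0.5
print("prune expert:", reap_saliency(gates, norms).argmin())
```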
## Also Available
- MiniMax-M2.5-REAP-19 MLX 8-bit (~193 GB)
- MiniMax-M2.5-REAP-39 MLX 8-bit (~138 GB)
- MiniMax-M2.5-REAP-39 MLX 4-bit (~73 GB)
- MiniMax-M2.5-REAP-29 MLX 4-bit
## Made for vMLX
This model was converted and optimized for vMLX, a free, open-source, macOS-native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.
## Credits
- Base model: MiniMaxAI/MiniMax-M2.5 by MiniMax AI
- REAP pruning: Akicou/MiniMax-M2-5-REAP-19 by Akicou
- MLX conversion: vMLX — Run AI locally on Mac. No compromises.
## Contact
For questions, issues, or collaboration: admin@vmlx.net