---
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- mlx
- mixture-of-experts
- moe
- pruning
- reap
- minimax
- 8bit
- quantized
- apple-silicon
library_name: mlx
base_model: Akicou/MiniMax-M2-5-REAP-39
---

<p align="center">
  <a href="https://vmlx.net">
    <img src="vmlx-logo.png" alt="vMLX" width="120">
  </a>
</p>

# MiniMax-M2.5 REAP-39 — MLX 8-bit

MLX 8-bit quantized version of [Akicou/MiniMax-M2-5-REAP-39](https://huggingface.co/Akicou/MiniMax-M2-5-REAP-39) for efficient local inference on Apple Silicon.

- **Quantization**: 8-bit (8.5 bits per weight, group size 64, affine mode; see the sketch after this list)
- **Architecture**: MiniMax M2.5 MoE — 62 layers, 154 experts (REAP-pruned from 256), 8 active per token
- **Context**: 196K tokens
- **Size**: ~138 GB
- **Pruning**: 39% of experts removed via [REAP](https://github.com/CerebrasResearch/reap) (Router-weighted Expert Activation Pruning)
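
The 8.5 bits-per-weight figure is consistent with MLX's affine scheme: each group of 64 int8 weights shares an fp16 scale and an fp16 bias, so (64 × 8 + 16 + 16) / 64 = 8.5 bits. A quantization like this one can be reproduced with mlx_lm's converter; a minimal sketch, assuming a recent mlx_lm whose `convert` accepts these arguments (the output path is illustrative):

```python
from mlx_lm import convert

# Quantize the REAP-pruned base model to 8-bit with group size 64
# (MLX affine quantization). mlx_path is an illustrative choice.
convert(
    "Akicou/MiniMax-M2-5-REAP-39",
    mlx_path="MiniMax-M2-5-REAP-39-mlx-8bit",
    quantize=True,
    q_bits=8,
    q_group_size=64,
)
```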

## Usage

```python
from mlx_lm import load, generate

# Downloads the weights on first use; the 8-bit variant needs roughly
# 138 GB of unified memory.
model, tokenizer = load("shieldstackllc/MiniMax-M2-5-REAP-39-mlx-8bit")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
```
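
For chat-style prompts, the model's chat template can be applied first. A minimal sketch following the usual mlx_lm pattern (the message content is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("shieldstackllc/MiniMax-M2-5-REAP-39-mlx-8bit")

# Format the conversation with the model's built-in chat template and
# append the assistant turn marker so generation starts in the right place.
messages = [{"role": "user", "content": "Summarize REAP pruning in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```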

Or with [vMLX](https://vmlx.net) for native macOS inference.

## About

MiniMax-M2.5 is a large Mixture-of-Experts language model by MiniMax AI. This variant was pruned by [Akicou](https://huggingface.co/Akicou) using REAP (Router-weighted Expert Activation Pruning), which removed 39% of the experts, reducing model size and memory footprint while maintaining strong performance. MLX quantization by [vMLX](https://vmlx.net).

## Also Available

- [MiniMax-M2.5-REAP-39 MLX 4-bit](https://huggingface.co/shieldstackllc/MiniMax-M2-5-REAP-39-mlx-4bit) (~73 GB)

## Made for vMLX

This model was converted and optimized for [vMLX](https://vmlx.net), a free, open-source, macOS-native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.

## Credits

- **Base model**: [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5) by MiniMax AI
- **REAP pruning**: [Akicou/MiniMax-M2-5-REAP-39](https://huggingface.co/Akicou/MiniMax-M2-5-REAP-39) by Akicou
- **MLX conversion**: [vMLX](https://vmlx.net) — Run AI locally on Mac. No compromises.

## Contact

For questions, issues, or collaboration: **admin@vmlx.net**