Kimi K2.5, quantized to run comfortably on a 512 GB M3 Mac Studio.
Other MLX options require 450G+, which is tight even with 500G of usable memory. This quant fits into ~380G with room to spare, giving you the flexibility to use longer contexts, run other models in parallel, and open up 3 browser tabs without OOM'ing.
If you're looking to use Kimi K2.5 as the core of a "Claude Code in a box" setup, you've come to the right place.
- **Update:** Uploaded a v2 that improves perplexity while keeping the same size.
- **Update:** Created an even smaller 2.5-bit version that uses less memory while maintaining the same perplexity as v1!
## Usage
```shell
# Start server at http://localhost:8080/v1/chat/completions
# Kimi K2.5 requires tiktoken + remote code for the tokenizer
uvx --from mlx-lm --with tiktoken \
  mlx_lm.server \
  --host 127.0.0.1 --port 8080 \
  --trust-remote-code \
  --model spicyneuron/Kimi-K2.5-MLX-mixed-2.8-bit
```
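Once the server is up, any OpenAI-compatible client can talk to it. Here's a minimal stdlib-only sketch; the endpoint and model name match the command above, while the prompt and sampling parameters are illustrative:

```python
import json
import urllib.request

# Endpoint exposed by mlx_lm.server (see the command above)
API_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt, max_tokens=512, temperature=0.6):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": "spicyneuron/Kimi-K2.5-MLX-mixed-2.8-bit",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def chat(prompt):
    """POST to the local server and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# chat("Write a haiku about quantization")  # requires the server to be running
```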
## Methodology
Quantized using a custom script inspired by the Unsloth/AesSedai/ubergarm mixed-precision GGUFs. MLX's quantization options differ from llama.cpp's, but the principles are the same:
- Sensitive layers (MoE routing, attention, output embeddings) get higher precision (BF16, 8-bit, 4-bit)
- More tolerant layers (MoE experts) get lower precision (2-bit, 3-bit)
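As a rough illustration of that split (not the actual script; the layer-name patterns and bit choices below are assumptions modeled on typical MoE checkpoints), a per-layer precision rule might look like:

```python
def bits_for(path: str):
    """Pick a quantization setting for a weight tensor by its path.

    Hypothetical layer-name patterns; the real script's rules differ.
    Returning None means "keep BF16".
    """
    # Routing, embeddings, and the output head stay near full precision
    if ".gate." in path or "embed_tokens" in path or "lm_head" in path:
        return None
    # Attention projections get a mid-tier precision
    if "self_attn" in path:
        return {"bits": 4, "group_size": 64}
    # Bulky MoE expert weights tolerate aggressive quantization
    if "experts" in path:
        return {"bits": 2, "group_size": 64}
    # Everything else: a safe default
    return {"bits": 8, "group_size": 64}
```

In practice a rule like this would be wired into the per-layer hook that `mlx_lm`'s convert path exposes for mixed-precision recipes.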
This one is comparable to Unsloth's UD-Q2_K_XL in size, but loads and runs noticeably faster thanks to MLX. Compared to the 3-bit MLX quant, it's faster, uses 80G less memory, and still has lower perplexity.
## Performance
Prompt processing speed (tokens/sec):

| Prompt Size (tokens) | GGUF | MLX 3-bit | MLX 2.8-bit v1 | MLX 2.8-bit v2 | MLX 2.5-bit |
|---|---|---|---|---|---|
| 1000 | 148.82 | 216.976 | 224.878 | 224.094 | 226.368 |
| 5000 | 130.90 | 230.227 | 235.595 | 231.966 | 237.426 |
| 10000 | 113.32 | 219.792 | 222.464 | 218.455 | 223.846 |
| 20000 | 89.72 | 186.549 | 187.915 | 186.169 | 188.502 |

Generation speed (tokens/sec):

| Gen Size (tokens) | GGUF | MLX 3-bit | MLX 2.8-bit v1 | MLX 2.8-bit v2 | MLX 2.5-bit |
|---|---|---|---|---|---|
| 500 | 23.38 | 25.781 | 27.443 | 26.586 | 27.571 |
| 1000 | 22.37 | 25.210 | 26.491 | 24.285 | 26.853 |
| 2000 | 21.89 | 23.944 | 24.573 | 22.603 | 24.689 |
| 5000 | 20.52 | 20.758 | 21.030 | 20.499 | 21.192 |
## Perplexity (MLX quants)
Lower is better; deltas are versus the MLX 3-bit baseline.

| Model | Perplexity | Δ vs 3-bit | Δ % |
|---|---|---|---|
| MLX 3 bit | 3.798 ± 0.021 | — | — |
| MLX 2.8 bit v1 | 3.768 ± 0.021 | -0.030 | -0.79% |
| MLX 2.8 bit v2 | 3.702 ± 0.020 | -0.096 | -2.53% |
| MLX 2.5 bit | 3.777 ± 0.020 | -0.021 | -0.55% |
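The relative columns are just each quant's perplexity change versus the 3-bit baseline, and can be reproduced with:

```python
base = 3.798  # MLX 3-bit perplexity from the table above

for name, ppl in [("2.8 bit v1", 3.768), ("2.8 bit v2", 3.702), ("2.5 bit", 3.777)]:
    delta = ppl - base
    pct = delta / base * 100
    print(f"{name}: {delta:+.3f} ({pct:+.2f}%)")  # e.g. "2.8 bit v1: -0.030 (-0.79%)"
```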
Benchmark commands:

```shell
# llama.cpp 8130
llama-bench -fa 1 --batch-size 2048 --ubatch-size 2048 --repetitions 5

# mlx_lm v0.30.7
mlx_lm.benchmark --num-trials 5
mlx_lm.perplexity --sequence-length 1000 --seed 222
```
Base model: moonshotai/Kimi-K2.5