# GGUF Quantizations of MiniMax-M2.7
Quantized using llama.cpp from BF16 source weights.
Original model: MiniMaxAI/MiniMax-M2.7
Run them in LM Studio or directly with llama.cpp.
## Download a file

Pick one of the files from the table below:
| Filename | Quant type | File Size | Description |
|---|---|---|---|
| MiniMax-M2.7-BF16.gguf | BF16 | ~427 GB | Full BF16 weights. Use for re-quantizing or max quality. |
| MiniMax-M2.7-Q8_0.gguf | Q8_0 | ~243 GB | Extremely high quality, generally unneeded but max available. |
| MiniMax-M2.7-Q6_K.gguf | Q6_K | ~188 GB | Very high quality, near perfect, recommended. |
| MiniMax-M2.7-Q5_K_M.gguf | Q5_K_M | ~162 GB | High quality, recommended. |
| MiniMax-M2.7-Q4_K_M.gguf | Q4_K_M | ~138 GB | Good quality, default size for most use cases, recommended. |
| MiniMax-M2.7-Q3_K_M.gguf | Q3_K_M | ~109 GB | Lower quality but usable, good for tight hardware. |
| MiniMax-M2.7-Q2_K.gguf | Q2_K | ~83 GB | Low quality, only for extreme memory constraints. |
## Downloading

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then download the quant you want, e.g. Q4_K_M:

```shell
huggingface-cli download dennny123/MiniMax-M2.7-GGUF --include "MiniMax-M2.7-Q4_K_M*" --local-dir ./
```

For quants split into multiple files (>50 GB), include the whole directory:

```shell
huggingface-cli download dennny123/MiniMax-M2.7-GGUF --include "MiniMax-M2.7-Q8_0/*" --local-dir ./
```
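If you prefer scripting the download in Python, the same filtered fetch works with `huggingface_hub.snapshot_download` and its `allow_patterns` argument. The `quant_pattern` helper below is our own illustration (not part of any API); it builds the same glob the CLI `--include` flag uses:

```python
def quant_pattern(quant: str) -> str:
    """Build an allow-pattern matching a quant's file(s), e.g. 'MiniMax-M2.7-Q4_K_M*'."""
    return f"MiniMax-M2.7-{quant}*"


if __name__ == "__main__":
    # Requires: pip install huggingface_hub
    from huggingface_hub import snapshot_download

    # Download only the Q4_K_M files (~138 GB) into the current directory.
    snapshot_download(
        repo_id="dennny123/MiniMax-M2.7-GGUF",
        allow_patterns=[quant_pattern("Q4_K_M")],
        local_dir=".",
    )
```

The trailing `*` also matches split quants stored in a subdirectory, so the same pattern works for both layouts.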
## Running the model

### llama.cpp

```shell
./llama-cli -m MiniMax-M2.7-Q4_K_M.gguf -ngl 99 -cnv -p "You are a helpful assistant."
```

### Ollama

```shell
ollama run hf.co/dennny123/MiniMax-M2.7-GGUF:Q4_K_M
```

### LM Studio

Search for `dennny123/MiniMax-M2.7-GGUF` in the model browser.
## Which file should I choose?

| Total memory (RAM + VRAM) | Recommended quant |
|---|---|
| 256GB+ | Q8_0 or Q6_K |
| 192GB | Q5_K_M |
| 144GB | Q4_K_M (most popular) |
| 112GB | Q3_K_M |
| 96GB | Q2_K |
MiniMax-M2.7 is a Mixture-of-Experts model (229B total, ~10B active per token). All 229B parameters must be loaded into memory even though only a fraction are active per token. Size your hardware by total parameter count.
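As a rough sanity check when sizing hardware, a quant's file size is approximately total parameters × average bits per weight ÷ 8. The sketch below uses approximate average bits-per-weight figures for llama.cpp's K-quant mixes (the exact averages vary per model, so treat these as estimates):

```python
# Rough GGUF size estimate: total params x average bits per weight / 8.
TOTAL_PARAMS = 229e9  # MoE *total* parameter count, not the ~10B active

# Approximate average bits per weight for llama.cpp quant mixes (assumption;
# the true average depends on the model's tensor shapes).
APPROX_BPW = {
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q5_K_M": 5.67,
    "Q4_K_M": 4.82,
    "Q3_K_M": 3.81,
    "Q2_K": 2.9,
}


def est_size_gb(quant: str, params: float = TOTAL_PARAMS) -> float:
    """Estimated file size in decimal GB for a given quant type."""
    return params * APPROX_BPW[quant] / 8 / 1e9


for q in ("Q4_K_M", "Q2_K"):
    print(f"{q}: ~{est_size_gb(q):.0f} GB")
# → Q4_K_M: ~138 GB
# → Q2_K: ~83 GB
```

These estimates line up with the file sizes in the table above; add headroom for the KV cache and runtime overhead on top of the weights.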
## Quantization details

- llama.cpp: latest `main` branch at conversion time
- Conversion: BF16 GGUF intermediate, quantized in a second pass
- Hardware: NVIDIA GH200 (96 GB) + 525 GB RAM
## About MiniMax-M2.7
MiniMax-M2.7 is a 229B parameter MoE model (10B active) built for coding and agentic workflows.
- SWE-Pro: 56.22% (matches GPT-5.3-Codex)
- VIBE-Pro: 55.6%
- Terminal Bench 2: 57.0%
- GDPval-AA: ELO 1495 (highest open-source, surpasses GPT-5.3)
- MLE Bench Lite: 66.6% medal rate
Recommended inference parameters: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
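To apply these settings programmatically, here is a sketch using the `llama-cpp-python` bindings (the model path and prompt are placeholders; swap in whichever quant you downloaded):

```python
# Recommended sampling settings from the model card.
SAMPLING = {"temperature": 1.0, "top_p": 0.95, "top_k": 40}

if __name__ == "__main__":
    # Requires: pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="MiniMax-M2.7-Q4_K_M.gguf",
        n_gpu_layers=-1,  # offload all layers to GPU, like -ngl 99
        n_ctx=8192,
    )
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a binary search in Python."},
        ],
        **SAMPLING,
    )
    print(out["choices"][0]["message"]["content"])
```

With `llama-cli`, the equivalent flags are `--temp 1.0 --top-p 0.95 --top-k 40`.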
See the official model card for full details.