MiniMax-M2.1-REAP-30-GGUF

This model was converted to GGUF format from 0xSero/MiniMax-M2.1-REAP-30 using GGUF Forge.

Quants

The following quants are available (smallest to largest): Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0
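A quant's on-disk size can be roughly estimated from the parameter count and its average bits per weight. The sketch below uses approximate bits-per-weight figures for llama.cpp quant types; actual file sizes vary per model, so treat these as ballpark numbers only.

```python
# Rough GGUF file-size estimates: params * bits-per-weight / 8 bits-per-byte.
# The bpw values below are approximate averages for llama.cpp quant types
# (assumed for illustration), not exact figures for this model.

PARAMS = 162e9  # MiniMax-M2.1-REAP-30: 162B parameters

APPROX_BPW = {
    "Q2_K": 2.6, "Q3_K_S": 3.5, "Q3_K_M": 3.9, "Q3_K_L": 4.3,
    "Q4_0": 4.5, "Q4_K_S": 4.6, "Q4_K_M": 4.8,
    "Q5_0": 5.5, "Q5_K_S": 5.5, "Q5_K_M": 5.7,
    "Q6_K": 6.6, "Q8_0": 8.5,
}

def est_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimated file size in GB for a given quant type."""
    return params * APPROX_BPW[quant] / 8 / 1e9

if __name__ == "__main__":
    for q in APPROX_BPW:
        print(f"{q:>7}: ~{est_size_gb(q):.0f} GB")
```

For a 162B-parameter model this puts Q8_0 near 170 GB and Q4_K_M near 100 GB, which is why the lower-bit quants exist at all for hardware with limited memory.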

Conversion Stats

Metric                Value
Job ID                39aa4cc6-6a81-49d1-bf07-05c57663486f
GGUF Forge Version    v6.0
Total Time            5.3h
Avg Time per Quant    45.7min

Step Breakdown

  • Quantization: 5.3h

🚀 Convert Your Own Models

Want to convert more models to GGUF?

👉 gguforge.com — a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!

Links

  • 🌐 Free Hosted Service: gguforge.com
  • 🛠️ Self-host GGUF Forge: GitHub
  • 📦 llama.cpp (quantization engine): GitHub
  • 💬 Community & Support: Discord

Converted automatically by GGUF Forge v6.0

Model Details

  • Repository: goniz/MiniMax-M2.1-REAP-30-GGUF
  • Format: GGUF
  • Model size: 162B params
  • Architecture: minimax-m2

