# MiniMax-M2.1-REAP-40-GGUF
This model was converted to GGUF format from 0xSero/MiniMax-M2.1-REAP-40 using GGUF Forge.
## Quants
The following quantizations are available: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0.
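To try one of these quants with llama.cpp, a typical workflow looks like the sketch below. The `--include` pattern and the exact GGUF filename are assumptions — check the repo's file list for the real names before running.

```shell
# Download only the Q4_K_M quant from this repo (requires the huggingface_hub CLI).
# The "*Q4_K_M*" pattern and the local paths are assumptions; adjust to the actual file names.
huggingface-cli download Akicou/MiniMax-M2.1-REAP-40-GGUF \
  --include "*Q4_K_M*" --local-dir ./models

# Run it with llama.cpp's CLI; -p is the prompt, -n caps the number of generated tokens.
llama-cli -m ./models/MiniMax-M2.1-REAP-40-Q4_K_M.gguf \
  -p "Write a haiku about quantization." -n 64
```

Smaller quants (Q2_K, Q3_K_*) trade quality for a lower memory footprint; Q8_0 is closest to the FP16 original.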
## Conversion Stats
| Metric | Value |
|---|---|
| Job ID | 7a1af7fe-6ef1-412e-b801-745e4bdbc9b0 |
| GGUF Forge Version | v3.6 |
| Total Time | 15.1h |
| Avg Time per Quant | 15.8min |
### Step Breakdown
- Download: 26.3min
- FP16 Conversion: 22.1min
- Quantization: 14.1h
## Convert Your Own Models

Want to convert more models to GGUF? gguforge.com is a free hosted GGUF conversion service: log in with Hugging Face and request conversions instantly.
## Links
- Free hosted service: gguforge.com
- Self-host GGUF Forge: GitHub
- llama.cpp (quantization engine): GitHub
- Community & support: Discord
*Converted automatically by GGUF Forge v3.6*
## Model Tree

Base model: MiniMaxAI/MiniMax-M2.1