moxin-org / MiniMax-M2.1-GGUF
Tags: Text Generation · GGUF · MiniMaxAI · MiniMaxM2ForCausalLM · llama.cpp · moxin-org · imatrix · conversational
arXiv: 2509.25689
License: mit
main / MiniMax-M2.1-GGUF / Q4_K_XL (139 GB)
1 contributor · History: 1 commit
bobchenyx: Upload folder using huggingface_hub · 7ea59f4 (verified) · 4 months ago
File                                          Size      Uploaded
MiniMax-M2.1-Q4_K_XL-00001-of-00006.gguf      24.9 GB   4 months ago
MiniMax-M2.1-Q4_K_XL-00002-of-00006.gguf      24.5 GB   4 months ago
MiniMax-M2.1-Q4_K_XL-00003-of-00006.gguf      24.5 GB   4 months ago
MiniMax-M2.1-Q4_K_XL-00004-of-00006.gguf      24.5 GB   4 months ago
MiniMax-M2.1-Q4_K_XL-00005-of-00006.gguf      24.5 GB   4 months ago
MiniMax-M2.1-Q4_K_XL-00006-of-00006.gguf      16.3 GB   4 months ago
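The six files above follow llama.cpp's split-GGUF naming convention, `<prefix>-%05d-of-%05d.gguf`; tools such as `llama-cli` accept the first shard and locate the remaining shards automatically. A minimal sketch, assuming only the prefix and shard count shown in the listing, reconstructs the shard names and checks that the per-file sizes add up to the folder's stated ~139 GB:

```python
# Sketch: reconstruct the split-GGUF shard names in this Q4_K_XL folder.
# Prefix and shard count are taken from the file listing above; the
# %05d-of-%05d pattern is llama.cpp's gguf-split naming convention.
prefix = "MiniMax-M2.1-Q4_K_XL"
n_shards = 6
shards = [
    f"{prefix}-{i:05d}-of-{n_shards:05d}.gguf"
    for i in range(1, n_shards + 1)
]

# Per-shard sizes (GB) copied from the listing; their sum should match
# the ~139 GB folder size shown in the repo header.
sizes_gb = [24.9, 24.5, 24.5, 24.5, 24.5, 16.3]
total_gb = sum(sizes_gb)

print(shards[0])            # first shard: pass this one to llama-cli
print(round(total_gb, 1))   # total size across all shards, in GB
```

Pointing a loader at `MiniMax-M2.1-Q4_K_XL-00001-of-00006.gguf` is sufficient; the remaining five shards only need to sit in the same directory.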