moxin-org/MiniMax-M2-GGUF

Tags: Text Generation, GGUF, MiniMaxAI, MiniMaxM2ForCausalLM, llama.cpp, moxin-org, imatrix, conversational
Paper: arxiv:2509.25689
License: mit
MiniMax-M2-GGUF / Moxin-Q4_K_XL (139 GB)
1 contributor, 11 commits
Latest commit: bobchenyx, "Delete Moxin-Q4_K_XL/GLM-4.6-Q4_K_XL-00009-of-00009.gguf" (a290c41, verified, 5 months ago)
MiniMax-M2-Q4_K_XL-00001-of-00006.gguf   24.9 GB   Upload folder using huggingface_hub   5 months ago
MiniMax-M2-Q4_K_XL-00002-of-00006.gguf   24.5 GB   Upload folder using huggingface_hub   5 months ago
MiniMax-M2-Q4_K_XL-00003-of-00006.gguf   24.5 GB   Upload folder using huggingface_hub   5 months ago
MiniMax-M2-Q4_K_XL-00004-of-00006.gguf   24.5 GB   Upload folder using huggingface_hub   5 months ago
MiniMax-M2-Q4_K_XL-00005-of-00006.gguf   24.5 GB   Upload folder using huggingface_hub   5 months ago
MiniMax-M2-Q4_K_XL-00006-of-00006.gguf   16.3 GB   Upload folder using huggingface_hub   5 months ago
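The files above follow the llama.cpp split-GGUF naming scheme (`NNNNN-of-MMMMM`): one model is sharded into six sequentially numbered files, and llama.cpp reassembles the shards automatically when pointed at the first one. A minimal sketch, assuming the `huggingface_hub` package and the repository layout shown above, of deriving the expected shard names and fetching the folder (the download call is left commented out because the full set is ~139 GB):

```python
# Sketch: expected shard filenames for a llama.cpp split GGUF,
# plus an illustrative (commented-out) huggingface_hub download.

def shard_names(prefix: str, total: int) -> list[str]:
    """Filenames for a split GGUF: 1-based index, zero-padded to 5 digits."""
    return [f"{prefix}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]

names = shard_names("MiniMax-M2-Q4_K_XL", 6)
print(names[0])  # pass this first shard's path to llama.cpp

# To actually fetch the Q4_K_XL folder (~139 GB), uncomment:
# from huggingface_hub import snapshot_download
# snapshot_download(
#     repo_id="moxin-org/MiniMax-M2-GGUF",
#     allow_patterns=["Moxin-Q4_K_XL/*"],
#     local_dir="MiniMax-M2-GGUF",
# )
# Then run, e.g.:
#   llama-cli -m MiniMax-M2-GGUF/Moxin-Q4_K_XL/MiniMax-M2-Q4_K_XL-00001-of-00006.gguf
```

Only the first shard is named on the command line; llama.cpp locates the remaining `-of-00006` files in the same directory.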