mlx-community/MiniMax-M2.1-3bit
Tags: Text Generation · MLX · Safetensors · Transformers · minimax · conversational · custom_code · 3-bit
License: modified-mit
Community discussions
#1: Anyone running this with an M4 Max 128 GB? How does it compare to the 4-bit quantization?
Opened about 11 hours ago by tumma72