majentik/Mistral-Small-4-119B-RotorQuant-MLX-2bit
Tags: Text Generation · MLX · Safetensors · mistral3 · rotorquant · kv-cache-quantization · mistral · Mixture of Experts · sparse-moe · multimodal · quantized · 2-bit · apple-silicon · 256k-context · thinking · conversational

License: apache-2.0
Mistral-Small-4-119B-RotorQuant-MLX-2bit / tokenizer_config.json
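Since the file shown here is tokenizer_config.json, a quick local sanity check is a plain JSON round-trip before handing the file to a loader. A minimal sketch — the keys and values below are hypothetical illustrations, not the actual contents of this repository's file:

```python
import json

# Hypothetical minimal tokenizer_config.json contents (illustrative only;
# the real file in this repo will contain different keys and values).
example_config = {
    "model_max_length": 262144,  # 256k context, per the repo tags (assumption)
    "tokenizer_class": "PreTrainedTokenizerFast",
}

# Round-trip through JSON, as any loader effectively does when it
# reads tokenizer_config.json from disk.
serialized = json.dumps(example_config, indent=2)
loaded = json.loads(serialized)
print(loaded["model_max_length"])  # → 262144
```

For the real file, replacing `example_config` with `json.load(open("tokenizer_config.json"))` would surface any syntax error before the model is loaded.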
Commit History

Add MLX quantized model weights
ba08f9e (verified) · majentik committed 2 days ago