majentik/Leanstral-RotorQuant-MLX-4bit
Tags: MLX, Safetensors, mistral3, rotorquant, kv-cache-quantization, 4-bit precision, weight-quantization, leanstral, lean4, formal-proofs, theorem-proving, quantized, apple-silicon, mistral, Mixture of Experts
License: apache-2.0
Commit History: Leanstral-RotorQuant-MLX-4bit / .gitattributes (branch: main)
- Add MLX quantized model weights (6671288, verified): majentik committed 3 days ago
- initial commit (17dfed2, verified): majentik committed 3 days ago