majentik/Leanstral-TurboQuant-MLX-8bit
Pipeline: Text Generation
Tags: MLX, Safetensors, mistral3, turboquant, kv-cache-quantization, 8bit, weight-quantization, leanstral, lean4, formal-proofs, theorem-proving, quantized, apple-silicon, mistral, Mixture of Experts, 8-bit precision
arXiv: 2504.19874
License: apache-2.0