majentik/gemma-4-E2B-it-RotorQuant-MLX-4bit
Tags: Image-Text-to-Text · MLX · Safetensors · gemma4 · rotorquant · kv-cache-quantization · gemma · multimodal · quantized · 4bit · conversational · 4-bit precision
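The 4bit and quantized tags refer to low-bit weight compression. As an illustrative sketch only (not the repo's actual RotorQuant scheme, whose details are in the linked arXiv paper), group-wise affine 4-bit quantization stores one scale and minimum per small group of weights and maps each weight to an integer in [0, 15]; the same idea underlies KV-cache quantization, applied to cached keys and values instead of weights:

```python
import numpy as np

def quantize_4bit(w, group_size=32):
    """Group-wise affine 4-bit quantization (illustrative sketch).

    Splits the flat weight vector into groups, stores one scale and
    one minimum per group, and maps each weight to an integer code
    in [0, 15] (16 levels = 4 bits).
    """
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid div-by-zero on flat groups
    q = np.clip(np.round((w - w_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, w_min

def dequantize_4bit(q, scale, w_min):
    """Reconstruct approximate float weights from the 4-bit codes."""
    return q.astype(np.float32) * scale + w_min

# Round-trip a random weight vector and measure reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale, w_min = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, w_min).reshape(-1)
max_err = float(np.abs(w - w_hat).max())
```

The per-group maximum error is bounded by half the quantization step, i.e. (group max − group min) / 30, which is why group-wise schemes with small groups lose little accuracy while cutting storage roughly 4x versus fp16.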
arXiv: 2504.19874
License: apache-2.0
Files and versions
Total repository size: 3.61 GB
1 contributor · History: 2 commits
Latest commit: 70f4fd0 (verified) by majentik, 2 days ago: "Add MLX quantized model with KV cache compression"
All files below were added in the commit "Add MLX quantized model with KV cache compression", 2 days ago.

.gitattributes                  1.57 kB
README.md                       4.06 kB
chat_template.jinja             11.9 kB
config.json                     6 kB
generation_config.json          208 Bytes
model.safetensors               3.58 GB
model.safetensors.index.json    230 kB
processor_config.json           902 Bytes
tokenizer.json                  32.2 MB
tokenizer_config.json           2.69 kB