Hugging Face
mlx-community/gemma-4-e2b-4bit
Maintained by the MLX Community organization (10.6k followers) · 0 likes
Tags: Any-to-Any · MLX · Safetensors · gemma4 · 4-bit precision
License: apache-2.0
Community (1)
⚠️ Existing MLX quantized Gemma 4 models (mlx-community, unsloth) produce garbage output due to quantizing PLE (Per-Layer Embedding) layers.
#1 · opened about 17 hours ago by Alkd
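The reported failure mode suggests a workaround: skip the PLE tensors during conversion so only the regular transformer weights are quantized. A minimal sketch, assuming a recent mlx-lm `convert` that accepts a per-layer quantization predicate; the `per_layer` path substring and the upstream repo id are assumptions, not confirmed by this page:

```python
# Workaround sketch for the issue above: exclude Per-Layer Embedding
# (PLE) tensors from 4-bit quantization. The substring used to detect
# PLE tensors is hypothetical — inspect the checkpoint's parameter
# names to find the real one.

PLE_MARKERS = ("per_layer",)  # hypothetical substring marking PLE tensors


def is_quantizable(path: str) -> bool:
    """Return True if the tensor at `path` is safe to quantize."""
    return not any(marker in path for marker in PLE_MARKERS)


# Example usage with mlx-lm's conversion API (assumed call shape —
# verify the signature against the mlx-lm version you have installed):
#
# from mlx_lm import convert
# convert(
#     "google/gemma-4-e2b",              # hypothetical upstream repo id
#     mlx_path="gemma-4-e2b-4bit",
#     quantize=True,
#     q_bits=4,
#     quant_predicate=lambda path, module, config: is_quantizable(path),
# )
```

Keeping the PLE layers in their original precision costs some disk space but avoids the degenerate output, since per-layer embeddings appear to be much more sensitive to quantization error than the attention and MLP weights.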