Quantized 4-bit, text-only MLX model, converted from https://huggingface.co/google/gemma-3-12b-it using mlx-lm 0.22.2.
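A minimal usage sketch with the mlx-lm Python API (requires Apple Silicon and `pip install mlx-lm`). The repo id below is a placeholder, not confirmed by this card; substitute this model's actual Hugging Face path:

```python
# Sketch: loading and generating with an mlx-lm quantized model.
# The repo id is a placeholder -- replace it with this model's actual path.
from mlx_lm import load, generate

model, tokenizer = load("your-username/gemma-3-12b-it-4bit")  # placeholder repo id

# gemma-3-12b-it is instruction-tuned, so apply the chat template first.
messages = [{"role": "user", "content": "Explain quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```

The same model can also be run from the command line with `python -m mlx_lm.generate --model <repo-id> --prompt "..."`.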
- Model size: 2B params
- Tensor type: BF16 · U32
- Quantization: 4-bit