---
library_name: mlx
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: image-text-to-text
base_model:
- LetheanNetwork/lemma
base_model_relation: quantized
tags:
- gemma4
- mlx
- apple-silicon
- 8bit
- on-device
- conversational
---
# LetheanNetwork/lemma-mlx-8bit

Gemma 4 in MLX format, 8-bit quantized, converted from
[LetheanNetwork/lemma](https://huggingface.co/LetheanNetwork/lemma)'s bf16
safetensors via `mlx_lm.convert`. Higher-precision sibling of
[`LetheanNetwork/lemma-mlx`](https://huggingface.co/LetheanNetwork/lemma-mlx)
(4-bit). For the LEK-merged variant see
[`lthn/lemma`](https://huggingface.co/lthn/lemma).
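## Usage

A minimal sketch of text-only generation with the `mlx-lm` package (`pip install mlx-lm`), assuming this repo loads through the standard `mlx_lm` API. Since the pipeline tag is image-text-to-text, image inputs may require `mlx-vlm` instead; the repo id below is this model's.

```python
# Minimal sketch: text-only generation via mlx-lm on Apple silicon.
# Downloads the 8-bit weights from the Hub on first run.
from mlx_lm import load, generate

model, tokenizer = load("LetheanNetwork/lemma-mlx-8bit")

text = generate(
    model,
    tokenizer,
    prompt="Explain 8-bit quantization in one sentence.",
    max_tokens=128,
)
print(text)
```

The same weights also work with the `mlx_lm.generate` command-line entry point by passing `--model LetheanNetwork/lemma-mlx-8bit`.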
## License

Apache 2.0, subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license).