---
library_name: mlx
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: image-text-to-text
base_model:
- LetheanNetwork/lemma
base_model_relation: quantized
tags:
- gemma4
- mlx
- apple-silicon
- 4bit
- on-device
- conversational
---

# LetheanNetwork/lemma-mlx
Gemma 4 in MLX format, 4-bit quantized, converted from
[LetheanNetwork/lemma](https://huggingface.co/LetheanNetwork/lemma)'s bf16
safetensors via `mlx_lm.convert`. These are the unmodified Google weights,
hosted in the Lethean namespace so that downstream tools do not have to
depend on external mlx-community mirrors.
For the LEK-merged sibling, see [`lthn/lemma`](https://huggingface.co/lthn/lemma).
## License

Apache 2.0, subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license).