---
library_name: mlx
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: image-text-to-text
base_model:
- LetheanNetwork/lemmy
base_model_relation: quantized
tags:
- gemma4
- mlx
- apple-silicon
- 8bit
- moe
- mixture-of-experts
- on-device
- conversational
---
# LetheanNetwork/lemmy-mlx-8bit

Gemma 4 26B A4B MoE in MLX format, 8-bit quantized, converted from
[LetheanNetwork/lemmy](https://huggingface.co/LetheanNetwork/lemmy)'s bf16
safetensors via `mlx_lm.convert`. Higher-precision sibling of
[`LetheanNetwork/lemmy-mlx`](https://huggingface.co/LetheanNetwork/lemmy-mlx)
(4-bit). For the LEK-merged variant see
[`lthn/lemmy`](https://huggingface.co/lthn/lemmy).
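## Usage

A minimal usage sketch with the standard `mlx-lm` command-line tools. This assumes the converted checkpoint works with the stock `mlx_lm.generate` CLI; the prompt and token budget below are illustrative, not verified against this repo.

```shell
# Install the MLX LM tooling (requires Apple Silicon)
pip install mlx-lm

# Generate with the 8-bit quantized model; weights are fetched from the Hub
mlx_lm.generate \
  --model LetheanNetwork/lemmy-mlx-8bit \
  --prompt "Explain mixture-of-experts routing in two sentences." \
  --max-tokens 128
```

The 8-bit weights trade roughly double the memory footprint of the 4-bit sibling for higher fidelity to the bf16 source.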
## License

Apache 2.0, subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license).