---
library_name: mlx
pipeline_tag: image-text-to-text
tags:
- gemma4
- lemma
- mlx
- 4bit
- apple-silicon
- multimodal
- on-device
- conversational
license: eupl-1.2
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
base_model:
- lthn/lemmy
base_model_relation: quantized
---
# Lemmy — Gemma 4 26B A4B MoE — MLX 4-bit
Lemmy is the Mixture-of-Experts member of the Lemma model family by Lethean: an EUPL-1.2 fork of Gemma 4 26B A4B with the Lethean Ethical Kernel (LEK) merged into the weights.
This repo hosts the **MLX 4-bit** build for native Apple Silicon inference via [`mlx-lm`](https://github.com/ml-explore/mlx-lm) and [`mlx-vlm`](https://github.com/Blaizzy/mlx-vlm). For the GGUF playground (Ollama, llama.cpp) see [`lthn/lemmy`](https://huggingface.co/lthn/lemmy). For the unmodified Google base see [`LetheanNetwork/lemmy`](https://huggingface.co/LetheanNetwork/lemmy).
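For a quick smoke test, both CLIs named above can pull this repo directly from the Hub. A minimal sketch, assuming `mlx-lm` and `mlx-vlm` are installed on an Apple Silicon machine (the prompts and image path are illustrative):

```shell
# Text-only generation via mlx-lm
pip install mlx-lm
mlx_lm.generate --model lthn/lemmy-mlx \
  --prompt "Summarise the EUPL-1.2 in one sentence." \
  --max-tokens 128

# Image + text generation via mlx-vlm
pip install mlx-vlm
python -m mlx_vlm.generate --model lthn/lemmy-mlx \
  --image photo.jpg \
  --prompt "Describe this image."
```

The first invocation downloads the 4-bit weights into the Hugging Face cache, so the initial run is network-bound; subsequent runs load from disk.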
## Family
| Repo | Format | Bits |
|---|---|---|
| [`lthn/lemmy`](https://huggingface.co/lthn/lemmy) | GGUF multi-quant | Q4_K_M → BF16 |
| [`lthn/lemmy-mlx`](https://huggingface.co/lthn/lemmy-mlx) | MLX | 4-bit |
| [`lthn/lemmy-mlx-8bit`](https://huggingface.co/lthn/lemmy-mlx-8bit) | MLX | 8-bit |
| [`lthn/lemmy-mlx-bf16`](https://huggingface.co/lthn/lemmy-mlx-bf16) | MLX | bf16 |
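Quantized MLX builds like the ones in the table are typically produced with `mlx-lm`'s converter. A sketch of how a 4-bit build of this family could be regenerated locally, assuming access to the safetensors weights (the source repo id and output path are illustrative, not a record of how this repo was built):

```shell
# Convert Hub safetensors weights to a 4-bit MLX checkpoint
pip install mlx-lm
mlx_lm.convert --hf-path lthn/lemmy \
  -q --q-bits 4 \
  --mlx-path ./lemmy-mlx-4bit
```

Passing `--q-bits 8` (or omitting `-q` entirely) yields the 8-bit and bf16 variants listed above.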
## License
EUPL-1.2. See [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license) for upstream base model terms.