---
library_name: mlx
pipeline_tag: image-text-to-text
tags:
- gemma4
- lemma
- mlx
- 4bit
- apple-silicon
- multimodal
- on-device
- conversational
license: eupl-1.2
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
base_model:
- lthn/lemma
base_model_relation: quantized
---

# Lemma — Gemma 4 E4B — MLX 4-bit

The mid-sized member of the Lemma model family by Lethean: an EUPL-1.2 fork of Gemma 4 E4B with the Lethean Ethical Kernel (LEK) merged into the weights.

This repo hosts the **MLX 4-bit** build for native Apple Silicon inference via [`mlx-lm`](https://github.com/ml-explore/mlx-lm) and [`mlx-vlm`](https://github.com/Blaizzy/mlx-vlm).

For the GGUF builds (Ollama, llama.cpp) see [`lthn/lemma`](https://huggingface.co/lthn/lemma). For the unmodified Google base see [`LetheanNetwork/lemma`](https://huggingface.co/LetheanNetwork/lemma).

## Family

| Repo | Format | Bits |
|---|---|---|
| [`lthn/lemma`](https://huggingface.co/lthn/lemma) | GGUF multi-quant | Q4_K_M → BF16 |
| [`lthn/lemma-mlx`](https://huggingface.co/lthn/lemma-mlx) | MLX | 4-bit |
| [`lthn/lemma-mlx-8bit`](https://huggingface.co/lthn/lemma-mlx-8bit) | MLX | 8-bit |
| [`lthn/lemma-mlx-bf16`](https://huggingface.co/lthn/lemma-mlx-bf16) | MLX | bf16 |

## License

EUPL-1.2. See [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license) for upstream base model terms.
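
## Usage

A minimal text-only sketch of loading this 4-bit build with `mlx-lm` (requires Apple Silicon and `pip install mlx-lm`). The repo id `lthn/lemma-mlx` is this repo; the prompt text is illustrative, and flags/behavior assume a current `mlx-lm` release.

```python
# Sketch: text generation with the 4-bit MLX build via mlx-lm.
# Assumes `pip install mlx-lm` on an Apple Silicon Mac.
from mlx_lm import load, generate

# Downloads (on first use) and loads the 4-bit MLX weights from the Hub.
model, tokenizer = load("lthn/lemma-mlx")

# Apply the chat template so the conversational model sees the expected format.
messages = [{"role": "user", "content": "Explain the EUPL-1.2 license in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

For image+text input, `mlx-vlm` exposes a similar interface; as of current releases it can be driven from the command line with `python -m mlx_vlm.generate --model lthn/lemma-mlx --image photo.jpg --prompt "Describe this image."` (filename illustrative).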