---
tags:
  - gemma4
  - lemma
  - mlx
  - 4bit
  - apple-silicon
  - multimodal
  - on-device
  - conversational
pipeline_tag: image-text-to-text
library_name: mlx
license: eupl-1.2
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
base_model:
  - lthn/lemrd
base_model_relation: quantized
---

# Lemrd — Gemma 4 31B Dense — MLX 4-bit

The largest dense member of the Lemma model family by Lethean. An EUPL-1.2 fork of Gemma 4 31B with the Lethean Ethical Kernel (LEK) merged into the weights.

This repo hosts the MLX 4-bit build for native Apple Silicon inference via `mlx-lm` and `mlx-vlm`. For the GGUF builds (Ollama, llama.cpp) see `lthn/lemrd`. For the unmodified Google base see `LetheanNetwork/lemrd`.
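As a sketch of on-device use, something like the following should work (assumes a standard `mlx-lm`/`mlx-vlm` install on Apple Silicon; check the exact flags against your installed versions, as the CLIs evolve):

```shell
# Install the MLX runtimes
pip install mlx-lm mlx-vlm

# Text-only generation via mlx-lm
mlx_lm.generate --model lthn/lemrd-mlx --prompt "Hello" --max-tokens 128

# Image + text via mlx-vlm (multimodal pipeline)
python -m mlx_vlm.generate --model lthn/lemrd-mlx \
  --prompt "Describe this image." --image photo.jpg
```

The first invocation downloads the 4-bit weights from the Hub into the local cache, so expect a sizeable one-time download for a 31B model.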

## Family

| Repo | Format | Bits |
| --- | --- | --- |
| `lthn/lemrd` | GGUF | multi-quant, Q4_K_M → BF16 |
| `lthn/lemrd-mlx` | MLX | 4-bit |
| `lthn/lemrd-mlx-8bit` | MLX | 8-bit |
| `lthn/lemrd-mlx-bf16` | MLX | bf16 |

## License

EUPL-1.2. See the Gemma Terms of Use for the upstream base model's terms.