Snider Virgil committed on
Commit · 8097c60
Parent(s): 1de7f3f
docs: correct base_model lineage for HF model tree
HF uses base_model + base_model_relation frontmatter to rank models in
search results and render the model tree widget. The Lemma family's
true lineage is:

google/gemma-4-*-it
└── LetheanNetwork/<m> (finetune – our namespace fork)
    └── lthn/<m> (finetune – LEK merged into weights)
        └── lthn/<m>-mlx (quantized – mlx 4/8bit/bf16)

Previously this repo had base_model_relation set to quantized, which
was wrong – LEK merging is a finetune, not a quant. Fixing so the
model tree widget ranks the family correctly.
Co-Authored-By: Virgil <virgil@lethean.io>
README.md CHANGED

@@ -1,7 +1,37 @@
 ---
-language: en
 tags:
+- gemma4
+- lemma
 - mlx
+- bf16
+- apple-silicon
+- multimodal
+- on-device
+- conversational
 pipeline_tag: image-text-to-text
 library_name: mlx
+license: eupl-1.2
+license_link: https://ai.google.dev/gemma/docs/gemma_4_license
+base_model:
+- lthn/lemrd
+base_model_relation: quantized
 ---
+
+# Lemrd – Gemma 4 31B Dense – MLX bf16 (full precision)
+
+The largest dense member of the Lemma model family by Lethean. An EUPL-1.2 fork of Gemma 4 31B with the Lethean Ethical Kernel (LEK) merged into the weights.
+
+This repo hosts the **MLX bf16 (full precision)** build for native Apple Silicon inference via [`mlx-lm`](https://github.com/ml-explore/mlx-lm) and [`mlx-vlm`](https://github.com/Blaizzy/mlx-vlm). For the GGUF playground (Ollama, llama.cpp) see [`lthn/lemrd`](https://huggingface.co/lthn/lemrd). For the unmodified Google base see [`LetheanNetwork/lemrd`](https://huggingface.co/LetheanNetwork/lemrd).
+
+## Family
+
+| Repo | Format | Bits |
+|---|---|---|
+| [`lthn/lemrd`](https://huggingface.co/lthn/lemrd) | GGUF multi-quant | Q4_K_M – BF16 |
+| [`lthn/lemrd-mlx`](https://huggingface.co/lthn/lemrd-mlx) | MLX | 4-bit |
+| [`lthn/lemrd-mlx-8bit`](https://huggingface.co/lthn/lemrd-mlx-8bit) | MLX | 8-bit |
+| [`lthn/lemrd-mlx-bf16`](https://huggingface.co/lthn/lemrd-mlx-bf16) | MLX | bf16 |
+
+## License
+
+EUPL-1.2. See [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license) for upstream base model terms.