Snider Virgil committed on
Commit 9a4f8d3 · 1 Parent(s): d669410

docs: correct base_model lineage for HF model tree


HF uses base_model + base_model_relation frontmatter to rank models in
search results and render the model tree widget. The Lemma family's
true lineage is:

google/gemma-4-*-it
└── LetheanNetwork/<m> (finetune — our namespace fork)
    └── lthn/<m> (finetune — LEK merged into weights)
        └── lthn/<m>-mlx (quantized — mlx 4/8bit/bf16)

Previously this repo had base_model_relation set to quantized, which
was wrong — LEK merging is a finetune, not a quant. Fixing so the
model tree widget ranks the family correctly.

Co-Authored-By: Virgil <virgil@lethean.io>
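
That lineage is declared per repo in the model card's front matter via the two fields named above. A minimal sketch for this MLX build (values taken from the tree; `quantized` is the right relation here because only the precision changes):

```yaml
# Front-matter sketch for lthn/lemmy-mlx, the quantized leaf of the tree.
base_model:
- lthn/lemmy
base_model_relation: quantized
```

The non-quant repos in the family use `finetune` instead, since the LEK merge changes the weights, not the precision.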

Files changed (1)
  1. README.md +32 -4
README.md CHANGED

@@ -1,9 +1,37 @@
 ---
 library_name: mlx
-license: apache-2.0
-license_link: https://ai.google.dev/gemma/docs/gemma_4_license
-pipeline_tag: text-generation
-base_model: google/gemma-4-26b-a4b-it
+pipeline_tag: image-text-to-text
 tags:
+- gemma4
+- lemma
 - mlx
+- 4bit
+- apple-silicon
+- multimodal
+- on-device
+- conversational
+license: eupl-1.2
+license_link: https://ai.google.dev/gemma/docs/gemma_4_license
+base_model:
+- lthn/lemmy
+base_model_relation: quantized
 ---
+
+# Lemmy — Gemma 4 26B A4B MoE — MLX 4-bit
+
+The Mixture-of-Experts member of the Lemma model family by Lethean. An EUPL-1.2 fork of Gemma 4 26B A4B with the Lethean Ethical Kernel (LEK) merged into the weights.
+
+This repo hosts the **MLX 4-bit** build for native Apple Silicon inference via [`mlx-lm`](https://github.com/ml-explore/mlx-lm) and [`mlx-vlm`](https://github.com/Blaizzy/mlx-vlm). For the GGUF playground (Ollama, llama.cpp) see [`lthn/lemmy`](https://huggingface.co/lthn/lemmy). For the unmodified Google base see [`LetheanNetwork/lemmy`](https://huggingface.co/LetheanNetwork/lemmy).
+
+## Family
+
+| Repo | Format | Bits |
+|---|---|---|
+| [`lthn/lemmy`](https://huggingface.co/lthn/lemmy) | GGUF multi-quant | Q4_K_M → BF16 |
+| [`lthn/lemmy-mlx`](https://huggingface.co/lthn/lemmy-mlx) | MLX | 4-bit |
+| [`lthn/lemmy-mlx-8bit`](https://huggingface.co/lthn/lemmy-mlx-8bit) | MLX | 8-bit |
+| [`lthn/lemmy-mlx-bf16`](https://huggingface.co/lthn/lemmy-mlx-bf16) | MLX | bf16 |
+
+## License
+
+EUPL-1.2. See [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license) for upstream base model terms.
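
The lineage this family of commits encodes can be sanity-checked mechanically. A hypothetical helper, not shipped in any repo, with the repo ids transcribed from the commit message tree and the Family table:

```python
# Hypothetical lineage map for the Lemma family, transcribed from the
# commit message tree. Values are (base_model, base_model_relation)
# pairs, the fields HF reads to build the model tree widget.
LINEAGE = {
    "LetheanNetwork/lemmy": ("google/gemma-4-26b-a4b-it", "finetune"),
    "lthn/lemmy": ("LetheanNetwork/lemmy", "finetune"),
    "lthn/lemmy-mlx": ("lthn/lemmy", "quantized"),
    "lthn/lemmy-mlx-8bit": ("lthn/lemmy", "quantized"),
    "lthn/lemmy-mlx-bf16": ("lthn/lemmy", "quantized"),
}


def chain_to_root(repo: str) -> list[str]:
    """Follow base_model links upward until the Google root is reached."""
    chain = [repo]
    while chain[-1] in LINEAGE:
        chain.append(LINEAGE[chain[-1]][0])
    return chain


# Every repo should resolve to the Google root, and only the
# precision-only MLX builds may carry the "quantized" relation.
for repo, (_, relation) in LINEAGE.items():
    assert chain_to_root(repo)[-1] == "google/gemma-4-26b-a4b-it"
    assert (relation == "quantized") == repo.startswith("lthn/lemmy-mlx")
```

This is the check the commit message describes in prose: a quant relation belongs only on repos that change precision, while the LEK merge steps stay `finetune`.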