---
library_name: mlx
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: image-text-to-text
base_model:
  - LetheanNetwork/lemma
base_model_relation: quantized
tags:
  - gemma4
  - mlx
  - apple-silicon
  - 4bit
  - on-device
  - conversational
---

# LetheanNetwork/lemma-mlx

Gemma 4 in MLX format, 4-bit quantized, converted from
[LetheanNetwork/lemma](https://huggingface.co/LetheanNetwork/lemma)'s bf16
safetensors via `mlx_lm.convert`. Unmodified Google weights hosted
in the Lethean namespace so downstream tools don't have to depend
on external mlx-community mirrors.
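
The conversion can be reproduced with the `mlx_lm` CLI. The flags below are a sketch: `--hf-path`, `-q`/`--q-bits`, and `--mlx-path` are standard `mlx_lm.convert` options, but the exact quantization settings used for this upload are not recorded in the card, and the output directory name is illustrative.

```shell
# Requires Apple Silicon; installs the MLX LM tooling.
pip install mlx-lm

# Convert the bf16 safetensors to 4-bit MLX format.
mlx_lm.convert \
  --hf-path LetheanNetwork/lemma \
  -q --q-bits 4 \
  --mlx-path lemma-mlx-4bit

# Once published, the quantized model can be run directly:
mlx_lm.generate --model LetheanNetwork/lemma-mlx --prompt "Hello"
```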

For the LEK-merged sibling see [`lthn/lemma`](https://huggingface.co/lthn/lemma).

## License

Apache 2.0, subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license).