---
library_name: mlx
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: image-text-to-text
base_model:
- LetheanNetwork/lemrd
base_model_relation: quantized
tags:
- gemma4
- mlx
- apple-silicon
- 4bit
- on-device
- conversational
---
# LetheanNetwork/lemrd-mlx
Gemma 4 in MLX format, 4-bit quantized, converted from
[LetheanNetwork/lemrd](https://huggingface.co/LetheanNetwork/lemrd)'s bf16
safetensors via `mlx_lm.convert`. These are unmodified Google weights,
hosted in the Lethean namespace so downstream tools don't have to
depend on external mlx-community mirrors.
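
To reproduce the conversion, a sketch of the `mlx_lm.convert` invocation used for a 4-bit quant (exact flags assumed from mlx-lm's standard CLI; output path is illustrative):

```shell
# Convert the bf16 safetensors to MLX format with 4-bit quantization.
# --hf-path: source repo on the Hub; --mlx-path: local output directory.
# -q enables quantization (4-bit by default in mlx-lm).
mlx_lm.convert \
  --hf-path LetheanNetwork/lemrd \
  --mlx-path lemrd-mlx \
  -q
```

Requires the `mlx-lm` package and Apple Silicon hardware.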
For the LEK-merged sibling see [`lthn/lemrd`](https://huggingface.co/lthn/lemrd).
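
## Usage

A minimal text-generation sketch using mlx-lm's standard Python API (the prompt and generation settings are illustrative; image inputs are outside mlx-lm's text pipeline):

```python
# Requires: pip install mlx-lm (Apple Silicon only)
from mlx_lm import load, generate

# Downloads the quantized weights from the Hub on first use.
model, tokenizer = load("LetheanNetwork/lemrd-mlx")

# Format the prompt with the model's chat template.
messages = [{"role": "user", "content": "Explain MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response; verbose=True streams tokens to stdout.
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The same model can be served from the command line with `mlx_lm.generate --model LetheanNetwork/lemrd-mlx --prompt "..."`.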
## License
Apache 2.0, subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/docs/gemma_4_license).