---
library_name: mlx
license: apache-2.0
license_link: https://ai.google.dev/gemma/docs/gemma_4_license
pipeline_tag: image-text-to-text
base_model:
  - LetheanNetwork/lemmy
base_model_relation: quantized
tags:
  - gemma4
  - mlx
  - apple-silicon
  - 4bit
  - moe
  - mixture-of-experts
  - on-device
  - conversational
---

# LetheanNetwork/lemmy-mlx

This repository contains Gemma 4 26B A4B MoE in MLX format, quantized to 4-bit, converted from the bf16 safetensors in LetheanNetwork/lemmy via `mlx_lm.convert`. The weights are unmodified Google weights, hosted in the Lethean namespace so downstream tools don't have to depend on external mlx-community mirrors.
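
A conversion along these lines produces this repo's artifacts (a sketch using `mlx_lm.convert`'s standard flags; the exact invocation used here is not recorded, and the `--mlx-path` output directory name is an assumption):

```shell
# Convert the bf16 safetensors to MLX format with 4-bit quantization.
# Requires `pip install mlx-lm` on an Apple Silicon Mac; downloads the
# base model from the Hugging Face Hub.
python -m mlx_lm.convert \
    --hf-path LetheanNetwork/lemmy \
    --mlx-path lemmy-mlx \
    -q --q-bits 4
```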

For the LEK-merged sibling see lthn/lemmy.
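
Loading from this namespace directly is a one-liner with `mlx_lm` (a minimal sketch; assumes `mlx-lm` is installed on Apple Silicon and pulls the weights from the Hub on first use — the prompt text is illustrative):

```python
from mlx_lm import load, generate

# Fetches and loads the 4-bit MLX weights from this repo.
model, tokenizer = load("LetheanNetwork/lemmy-mlx")

# Apply the chat template, since this is a conversational model.
messages = [{"role": "user", "content": "Summarize MoE routing in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```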

## License

Released under Apache 2.0, subject to the Gemma Terms of Use.