Lemer
A Gemma 4 E2B finetune by lthn.ai, licensed under EUPL-1.2.
Ollama: ollama run hf.co/lthn/lemer:Q4_K_M
MLX: bf16, 8bit, 6bit, 5bit, 4bit, mxfp8, mxfp4, nvfp4
GGUF: BF16, Q8_0, Q6_K, Q5_K_M, Q4_K_M, Q3_K_M
HF Transformers: 4-bit NF4 weights on the main branch; bf16 weights in hf-bf16/
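As a rough guide to choosing among the quantizations above, checkpoint size scales with bits per weight. The sketch below estimates on-disk sizes; the ~2B parameter count (suggested by "E2B") is an assumption, and the bit widths are nominal, since k-quants such as Q4_K_M actually use slightly more than 4 bits per weight.

```python
# Rough on-disk size estimate for the GGUF quantizations listed above.
# N_PARAMS is an assumed effective parameter count, not a confirmed figure.

def est_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate checkpoint size in GB: params * bits / 8 bits per byte."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 2e9  # assumed for an "E2B" model

for name, bits in [("BF16", 16), ("Q8_0", 8), ("Q6_K", 6),
                   ("Q5_K_M", 5), ("Q4_K_M", 4), ("Q3_K_M", 3)]:
    print(f"{name:7s} ~{est_size_gb(N_PARAMS, bits):.2f} GB")
```

Under these assumptions, BF16 comes to roughly 4 GB and Q4_K_M to roughly 1 GB, which is why the 4-bit builds are the usual choice for consumer hardware.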
Licence
Training data and adapter: EUPL-1.2
Base model: Apache 2.0
Model tree for LetheanNetwork/lemer-bk
Base model: google/gemma-4-E2B-it