---
language:
- en
license: eupl-1.2
tags:
- safetensors
- 4-bit
- transformers
- 8-bit
- gguf
- gemma4
- bitsandbytes
base_model:
- google/gemma-4-E2B-it
base_model_relation: quantized
pipeline_tag: any-to-any
datasets:
- lthn/LEM-research
---
# Lemer
A [Gemma 4 E2B](https://huggingface.co/google/gemma-4-E2B-it) finetune by [lthn.ai](https://lthn.ai) — EUPL-1.2
- **Ollama**: `ollama run hf.co/lthn/lemer:Q4_K_M`
- **MLX**: [bf16](https://huggingface.co/lthn/lemer/tree/bf16), [8bit](https://huggingface.co/lthn/lemer/tree/8bit), [6bit](https://huggingface.co/lthn/lemer/tree/6bit), [5bit](https://huggingface.co/lthn/lemer/tree/5bit), [4bit](https://huggingface.co/lthn/lemer/tree/4bit), [mxfp8](https://huggingface.co/lthn/lemer/tree/mxfp8), [mxfp4](https://huggingface.co/lthn/lemer/tree/mxfp4), [nvfp4](https://huggingface.co/lthn/lemer/tree/nvfp4)
- **GGUF**: [BF16](https://huggingface.co/lthn/lemer/tree/bf16), [Q8_0](https://huggingface.co/lthn/lemer/tree/8bit), [Q6_K](https://huggingface.co/lthn/lemer/tree/6bit), [Q5_K_M](https://huggingface.co/lthn/lemer/tree/5bit), [Q4_K_M](https://huggingface.co/lthn/lemer/tree/4bit), [Q3_K_M](https://huggingface.co/lthn/lemer/tree/3bit-gguf)
- **HF Transformers**: 4-bit NF4 weights on `main`; bf16 weights in `hf-bf16/`
## Base
[google/gemma-4-E2B-it](https://huggingface.co/google/gemma-4-E2B-it)
## More
- [lthn.ai](https://lthn.ai)
- [Lethean Network](https://github.com/LetheanNetwork)
## Licence
- Training data and adapter: [EUPL-1.2](https://joinup.ec.europa.eu/collection/eupl/eupl-text-eupl-12)
- Base model: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)