# LEM-Gemma3-1B-GGUF

GGUF quantisations of LEM-Gemma3-1B, the foundation teacher model of the CL-BPL cascade. Ethics are in the weights, not in a system prompt.
The 4B model trained on this 1B's distilled responses placed 25th worldwide for Instruction Following on LiveBench.
LEM-Gemma3-1B (safetensors) | Collection | Research Paper | Benchmarks
## Quick Start
No system prompt needed. Ethics hold from weights alone.
```bash
# GPU offload (CUDA, ROCm, Metal)
llama-server -m LEM-Gemma3-1B-Q4_K_M.gguf -ngl 99 --port 8080

# CPU only: fast enough for a 1B model
llama-server -m LEM-Gemma3-1B-Q4_K_M.gguf -ngl 0 --port 8080

# OpenAI-compatible API
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"LEM-Gemma3-1B","messages":[{"role":"user","content":"What is kindness?"}]}'
```
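The same call can be made from Python against llama-server's OpenAI-compatible endpoint. A minimal stdlib-only sketch, assuming the server is running on port 8080 as above; `build_chat_request` and `chat` are illustrative helper names, not part of any library:

```python
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "LEM-Gemma3-1B") -> dict:
    """Build an OpenAI-style chat-completion payload (no system prompt needed)."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def chat(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST the payload to llama-server and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server up, `chat("What is kindness?")` returns the model's reply as a plain string.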
## Quantisations
All quantised from the BF16 source using llama.cpp.
| Bits | Quant | Size | Notes |
|---|---|---|---|
| 3-bit | IQ3_XXS | 823 MB | Smallest usable (imatrix) |
| 3-bit | IQ3_XS | 820 MB | (imatrix) |
| 3-bit | Q3_K_S | 819 MB | |
| 3-bit | Q3_K_M | 851 MB | |
| 4-bit | IQ4_XS | 847 MB | (imatrix) |
| 4-bit | Q4_K_S | 943 MB | |
| 4-bit | Q4_K_M | 967 MB | Recommended: best quality/size balance |
| 5-bit | Q5_K_S | 1.0 GB | |
| 5-bit | Q5_K_M | 1.0 GB | Near-lossless |
| 6-bit | Q6_K | 1.2 GB | |
| 8-bit | Q8_0 | 1.3 GB | Virtually lossless |
| 16-bit | BF16 | 2.4 GB | Full precision |
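The size column implies how much each quant saves over the BF16 source. A quick sketch of the arithmetic, with sizes copied from the table and treated as decimal MB (an assumption about how the listed sizes were rounded):

```python
# File sizes from the table above, in decimal MB (assumed rounding).
SIZES_MB = {
    "IQ3_XXS": 823, "Q4_K_M": 967, "Q8_0": 1300, "BF16": 2400,
}


def compression_vs_bf16(quant: str) -> float:
    """How many times smaller a quant file is than the BF16 source."""
    return SIZES_MB["BF16"] / SIZES_MB[quant]


for name in ("IQ3_XXS", "Q4_K_M", "Q8_0"):
    print(f"{name}: {compression_vs_bf16(name):.2f}x smaller than BF16")
```

Note the ratios fall well short of the naive bit ratios (e.g. 16/4 = 4x for Q4): typical for small models, where llama.cpp commonly keeps the large embedding and output tensors at higher precision.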
## About LEM-Gemma3-1B
The 1B is trained first and hardest โ its alignment must be pristine because every larger model inherits from it. CL-BPL uses the 1B's constrained latent space as an advantage: with fewer parameters, there are fewer places for sycophancy to hide.
```
LEM-Gemma3-1B (this model, foundation teacher)
  -> LEM-Gemma3-4B  (25th IF on LiveBench)
  -> LEM-Gemma3-12B (next)
  -> LEM-Gemma3-27B (planned)
```
Built on Google Gemma3-1B-IT through the Ethics-Composure-Ethics sandwich structure (700 iterations across 3 phases). Full training details in the main model card.
## Other Formats
| Format | Repo |
|---|---|
| FP16 safetensors (Transformers, vLLM) | lthn/LEM-Gemma3-1B |
## Licence
European Union Public Licence v1.2 (EUPL-1.2). Base model subject to Google's Gemma licence terms.
## Citation
```bibtex
@misc{lem-gemma3-1b-2026,
  title={LEM-Gemma3-1B: Foundation Teacher for Cymatic-Linguistic Back-Propagation},
  author={Lethean Project},
  year={2026},
  url={https://huggingface.co/lthn/LEM-Gemma3-1B}
}
```