# LEM-Gemma3-4B-GGUF

GGUF quantisations of LEM-Gemma3-4B, an intrinsically aligned 4B language model trained with Cymatic-Linguistic Back-Propagation (CL-BPL). The ethics are in the weights, not in a system prompt.

Ranked 25th in the world for Instruction Following on LiveBench, competing against models 10-30x its size.

LEM-Gemma3-4B (MLX/safetensors) | Collection | Research Paper | Benchmarks


## Quick Start

No system prompt needed. The model responds with axiom-aligned reasoning from weights alone.

```sh
# GPU offload (CUDA, ROCm, Metal)
llama-server -m LEM-Gemma3-4B-Q4_K_M.gguf -ngl 99 --port 8080

# CPU only
llama-server -m LEM-Gemma3-4B-Q4_K_M.gguf -ngl 0 --port 8080

# OpenAI-compatible API
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"LEM-Gemma3-4B","messages":[{"role":"user","content":"What is kindness?"}]}'
```
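If you prefer Python over curl, the same OpenAI-compatible endpoint can be queried with the standard library alone. This is a minimal sketch: the host and port match the llama-server examples above, and `build_payload`/`chat` are illustrative names, not part of any shipped client.

```python
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    # Same body as the curl example. No system prompt is included,
    # since the model's alignment lives in the weights.
    return {
        "model": "LEM-Gemma3-4B",
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, base_url: str = "http://localhost:8080") -> str:
    # POST to llama-server's OpenAI-compatible chat endpoint.
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With a server running locally:
# print(chat("What is kindness?"))
```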

## Quantisations

All quantised from the BF16 source using llama.cpp. imatrix-based quants use calibration data from the LEM training set.

| Bits | Quant | Size | Notes |
|------|-------|------|-------|
| 1-bit | IQ1_S | 1.1 GB | Extreme compression, experimental (imatrix) |
| 1-bit | IQ1_M | 1.1 GB | Slightly better than IQ1_S (imatrix) |
| 2-bit | IQ2_XXS | 1.2 GB | Ultra-low memory (imatrix) |
| 2-bit | IQ2_XS | 1.3 GB | (imatrix) |
| 2-bit | IQ2_M | 1.4 GB | (imatrix) |
| 3-bit | IQ3_XXS | 1.6 GB | Good balance for constrained devices (imatrix) |
| 3-bit | IQ3_XS | 1.7 GB | (imatrix) |
| 3-bit | Q3_K_S | 1.8 GB | |
| 3-bit | Q3_K_M | 2.0 GB | |
| 4-bit | IQ4_XS | 2.1 GB | (imatrix) |
| 4-bit | Q4_K_S | 2.2 GB | |
| 4-bit | Q4_K_M | 2.3 GB | Recommended — best quality/size balance |
| 5-bit | Q5_K_S | 2.6 GB | |
| 5-bit | Q5_K_M | 2.6 GB | Near-lossless |
| 6-bit | Q6_K | 3.0 GB | |
| 8-bit | Q8_0 | 3.8 GB | Virtually lossless |
| 16-bit | BF16 | 7.2 GB | Full precision |

Note: the 1-2 bit quants exceed 1 GB because of Gemma 3's 262K-token vocabulary: the embedding layer falls back to a higher-precision quantisation. This is a known Gemma 3 characteristic.
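That note can be sanity-checked with back-of-the-envelope arithmetic. The hidden size of 2560, the ~3.2B non-embedding parameter count, and the bits-per-weight figures below are assumptions about Gemma 3 4B and llama.cpp's quant formats, not numbers taken from this repo.

```python
# Rough size estimate for an IQ1_S file of this model.
vocab, hidden = 262_144, 2_560         # Gemma 3 vocabulary / assumed d_model
embed_params = vocab * hidden          # ~0.67B weights in the embedding alone
other_params = 3.2e9                   # assumed remaining parameter count

embed_gb = embed_params * 6.5625 / 8 / 1e9  # embedding kept near ~6.56 bpw as fallback
other_gb = other_params * 1.56 / 8 / 1e9    # IQ1_S is roughly 1.56 bits per weight
total_gb = embed_gb + other_gb
print(f"{total_gb:.2f} GB")            # lands near the 1.1 GB listed in the table
```

Roughly half the file is the embedding table, which is why even the 1-bit quants cannot shrink below ~1 GB.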


## Benchmarks

### LiveBench (External, Objective)

Evaluated on the LiveBench 2026-01-08 release: no LLM judge, and monthly-refreshed questions keep contamination risk low.

| Category | Score | Context |
|----------|-------|---------|
| Instruction Following | 43.5 | 25th globally — above Claude Opus 4.1 Thinking (42.4) |
| Data Analysis | 30.4 | Approaching GPT-OSS-120B (38.8) at 1/30th the size |
| Math | 8.6 | Expected for 4B parameter count |
| Reasoning | 4.6 | Capacity-limited at this scale |
| Language | 4.3 | Capacity-limited at this scale |
| Average | 18.3 | |

### Internal Grammar Scorer

Deterministic linguistic analysis via the go-i18n Grammar Reversal Engine — no LLM judge, sub-millisecond per response.

| Metric | Score |
|--------|-------|
| Grammar composite | 61.4 |
| Uplift | +7.9 |
| Enrichment | +6.6 |
| Echo | 0.387 |
| Sycophancy | 5% (1/21) |
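The scorer itself is not published here, but an echo-style metric of the kind reported above can be illustrated as token overlap between prompt and response. This is a hypothetical sketch of the idea, not the actual go-i18n Grammar Reversal Engine.

```python
def echo_score(prompt: str, response: str) -> float:
    # Fraction of response tokens that merely repeat prompt tokens;
    # lower means the model contributes more of its own language.
    prompt_tokens = set(prompt.lower().split())
    response_tokens = response.lower().split()
    if not response_tokens:
        return 0.0
    echoed = sum(t in prompt_tokens for t in response_tokens)
    return echoed / len(response_tokens)

# echo_score("what is kindness", "kindness is attention given freely")
```

Because it is pure string arithmetic with no model in the loop, a metric like this is deterministic and runs in well under a millisecond per response.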

## About LEM-Gemma3-4B

CL-BPL treats alignment as wave interference — analogous to Chladni plate cymatics. Rather than constraining outputs with RLHF or system prompts, CL-BPL embeds ethical orientation directly into weights through a progressive curriculum where smaller aligned models teach larger ones.

This model is the second in the CL-BPL cascade:

```
LEM-Gemma3-1B (teacher)
  -> LEM-Gemma3-4B (this model, 25th IF globally)
       -> LEM-Gemma3-12B (next)
            -> LEM-Gemma3-27B (planned)
```

Built on Google's Gemma3-4B-IT through a 7-phase curriculum (~5,550 iterations), with each phase fused into the weights. Full training details are in the main model card.

## Other Formats

| Format | Repo |
|--------|------|
| MLX safetensors (Apple Silicon) | lthn/LEM-Gemma3-4B |

## Licence

European Union Public Licence v1.2 (EUPL-1.2). Base model subject to Google's Gemma licence terms.

## Citation

```bibtex
@misc{lem-gemma3-4b-2026,
  title={LEM-Gemma3-4B: Intrinsically Aligned Language Model via Cymatic-Linguistic Back-Propagation},
  author={Lethean Project},
  year={2026},
  url={https://huggingface.co/lthn/LEM-Gemma3-4B}
}
```