---
license: other
license_name: gemma
license_link: https://ai.google.dev/gemma/terms
base_model: google/gemma-3-1b-it
tags:
- logos
- epistemological-safety
- instrument-trap
- gguf
- claim-classifier
datasets:
- LumenSyntax/instrument-trap-benchmark
pipeline_tag: text-generation
language:
- en
- es
---
# Logos 10v2 — Gemma 3 1B F16 (Production)
This is the production epistemological firewall model from [LumenSyntax](https://lumensyntax.com), distributed as a full-precision (F16) GGUF for claim classification and epistemological safety evaluation.
## Benchmark Results
| Metric | Value |
|--------|-------|
| **Behavioral accuracy** | 82.3% |
| **Epistemological safety** | 97.7% |
| **False approval rate** | 1.58% |
| **Hallucination rate** | 0.00% |
| **Dangerous failures** | 1.9% |
## Why F16?
**The Q4_K_M quantization has known safety failures.** In testing, it falsely approved dangerous claims that the F16 build correctly rejected. For an epistemological safety model, numerical precision matters more than file size.
## What Logos Does
Logos is a **claim classifier**, not a chatbot. It evaluates whether claims cross epistemological boundaries. Logos is **fine-tuned**, not prompted. Behavioral constraints emerge from training, not system instructions.
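
Below is a minimal inference sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), assuming the F16 GGUF has already been downloaded locally. The file name, the chat-style input, and the form of the model's output are illustrative assumptions, not the documented interface.

```python
from llama_cpp import Llama

# Load the full-precision GGUF (file name is an assumption; use the name
# of the file you actually downloaded from this repository).
llm = Llama(
    model_path="logos10v2-gemma3-1b-f16.gguf",
    n_ctx=2048,
    verbose=False,
)

# Logos is fine-tuned as a claim classifier, so the claim is passed
# directly as the user turn; no special system prompt is assumed here.
claim = "This supplement cures all known forms of cancer."
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": claim}],
    max_tokens=64,
    temperature=0.0,  # deterministic classification output
)

print(result["choices"][0]["message"]["content"])
```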
## Access
This model requires approved access. Request access using the form above and describe your intended use case.
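
Once access is approved, the GGUF can be fetched programmatically with `huggingface_hub`; a short sketch follows, where the repository id and file name are placeholders to replace with the values shown on this page.

```python
from huggingface_hub import hf_hub_download

# Placeholders: substitute the actual repo id and GGUF file name from
# this model page. The token must belong to an account whose access
# request has been approved.
model_path = hf_hub_download(
    repo_id="LumenSyntax/<this-repo>",
    filename="logos10v2-gemma3-1b-f16.gguf",
    token="hf_xxx",  # or set the HF_TOKEN environment variable instead
)
print(model_path)
```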
## Related
- **Paper**: [The Instrument Trap](https://doi.org/10.5281/zenodo.18716474) (DOI: 10.5281/zenodo.18716474)
- **Benchmark**: [instrument-trap-benchmark](https://huggingface.co/datasets/LumenSyntax/instrument-trap-benchmark)
- **Cross-family models**: [logos14-nemotron-4b](https://huggingface.co/LumenSyntax/logos14-nemotron-4b), [logos16v2-stablelm2-1.6b](https://huggingface.co/LumenSyntax/logos16v2-stablelm2-1.6b)
## License
This model inherits the [Gemma license](https://ai.google.dev/gemma/terms) from its base model.