---
license: apache-2.0
base_model: nvidia/Nemotron-Mini-4B-Instruct
tags:
- epistemological-safety
- ai-safety
- truth-verification
- instrument-trap
- logos
- cross-family-replication
datasets:
- LumenSyntax/instrument-trap-benchmark
language:
- en
pipeline_tag: text-generation
---
# Logos 14 — Nemotron 4B Epistemological Auditor
Cross-family replication of the Logos epistemological classifier on NVIDIA's Nemotron Mini 4B architecture, providing evidence that epistemological fine-tuning replicates across model families.
## Benchmark Results (300/300 stratified)
| Metric | Score |
|--------|-------|
| **Behavioral accuracy** | **95.7%** (CI [92.7, 97.5]) |
| Identity collapse | 0% |
| Fabrication | 0% |
| False approval | 1.3% |
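The card does not state how the confidence interval was computed, but a 95% Wilson score interval for 287/300 correct (95.7% of 300 responses) reproduces the reported bounds. A minimal sketch; the `wilson_ci` helper is illustrative, not taken from the benchmark code:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    spread = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - spread, center + spread

lo, hi = wilson_ci(287, 300)  # 287/300 = 95.7% behavioral accuracy
print(f"[{lo:.1%}, {hi:.1%}]")  # → [92.7%, 97.5%]
```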
### Cross-Family Comparison
| Model | Family | Score |
|-------|--------|-------|
| logos-auditor (9B) | Google Gemma 2 | 97.3% |
| **logos14 (4B)** | **NVIDIA Nemotron** | **95.7%** |
| logos16v2 (1.6B) | Stability AI StableLM 2 | 93.0% |
The difference between the Nemotron and StableLM scores is not statistically significant (χ² = 1.88, p = 0.170).
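As a consistency check (a sketch, assuming a 2×2 chi-squared comparison with one degree of freedom, which the card does not spell out), the reported p-value follows from the statistic via the chi-squared survival function:

```python
from math import erfc, sqrt

def chi2_p_df1(x):
    """P(X > x) for a chi-squared variable with 1 degree of freedom."""
    # For df = 1, the survival function reduces to erfc(sqrt(x / 2)).
    return erfc(sqrt(x / 2))

p = chi2_p_df1(1.88)
print(f"{p:.3f}")  # → 0.170
```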
## What This Model Does
Logos is an **epistemological classifier**, not a chatbot. It evaluates whether claims cross epistemological boundaries. It is fine-tuned, not prompted: its behavioral constraints emerge from training rather than from instructions at inference time.
## Access
This model requires approved access. Request access using the form above and describe your intended use case.
## Connection to Research
This model is part of the evidence for "The Instrument Trap" (DOI: [10.5281/zenodo.18716474](https://doi.org/10.5281/zenodo.18716474)).
## License
Apache 2.0 (inherited from base model nvidia/Nemotron-Mini-4B-Instruct)