
Logos 14: Nemotron 4B Epistemological Auditor

A cross-family replication of the Logos epistemological classifier on NVIDIA's Nemotron Mini 4B architecture, providing evidence that epistemological fine-tuning replicates across model families.

Benchmark Results (300/300 stratified)

| Metric | Score |
|---|---|
| Behavioral accuracy | 95.7% (CI [92.7, 97.5]) |
| Identity collapse | 0% |
| Fabrication | 0% |
| False approval | 1.3% |
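The bracketed interval is consistent with a 95% Wilson score interval for 287/300 correct; the card does not state which method was used, so treating it as Wilson is an assumption. A quick check:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 287 of 300 stratified items classified correctly (95.7%)
lo, hi = wilson_ci(287, 300)
print(f"[{lo:.1%}, {hi:.1%}]")  # matches the reported [92.7, 97.5]
```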

Cross-Family Comparison

| Model | Family | Score |
|---|---|---|
| logos-auditor (9B) | Google Gemma 2 | 97.3% |
| logos14 (4B) | NVIDIA Nemotron | 95.7% |
| logos16v2 (1.6B) | Stability AI StableLM 2 | 93.0% |

The Nemotron and StableLM scores are statistically indistinguishable (χ² = 1.88, p = 0.170).

What This Model Does

Logos is an epistemological classifier, not a chatbot: it evaluates whether claims cross epistemological boundaries. Its constraints are fine-tuned, not prompted; the behavior emerges from training rather than from a system prompt.

Access

This model requires approved access. Request access using the form above and describe your intended use case.

Connection to Research

This model is part of the evidence for "The Instrument Trap" (DOI: 10.5281/zenodo.18716474).

License

Apache 2.0 (inherited from base model nvidia/Nemotron-Mini-4B-Instruct)
