---
license: apache-2.0
language:
  - en
tags:
  - reasoning
  - logic
  - socratic
  - anti-sycophantic
  - from-scratch
  - edge-ai
  - philosophy
pipeline_tag: text-generation
---

# Odigos Tiny Reasoner (1.1B)

*Feynman verifies. Seneca navigates uncertainty. Aurelius sees clearly. Socrates makes you think.*

A 1.1B parameter reasoning engine trained from scratch on 2.1B tokens of curated reasoning data. Not a chatbot. A thinking partner.
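The card doesn't ship a usage snippet; a minimal sketch, assuming the model follows the standard Hugging Face `transformers` causal-LM layout (the repo id is taken from the citation URL below; the model's exact prompt/chat format is not documented here):

```python
# Minimal generation sketch. Assumptions: standard transformers causal-LM
# checkpoint layout; no special chat template (unverified for this model).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fleshapron/odigos-tiny-reasoner"  # repo id from the citation URL
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "Claim: heavier objects fall faster. Verify this."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```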

## The Four Pillars

| Pillar | Voice | What it does |
|---|---|---|
| Verify | Feynman | Test claims computationally. Explain simply. No jargon. |
| Navigate Uncertainty | Seneca | Know what you don't know. Think second-order. Prepare for adversity. |
| See Clearly | Aurelius | Strip pretension. Accept reality. Practical wisdom. |
| Challenge | Socrates | Questions, not answers. Make the user find the truth themselves. |

Jesus is the moral thread throughout: parables as reasoning tools, uncomfortable truth, intellectual accountability.

## Unique Training Data

2.1B tokens, 100% open-licensed. No web crawl. Every token teaches reasoning:

- 1,240+ classical texts from Project Gutenberg (Plato to Darwin to Jung)
- Supreme Court opinions (structured judicial reasoning)
- Oxford-style debate transcripts (adversarial argumentation)
- D&D transcripts (collaborative reasoning under uncertainty)
- Repair guides (diagnostic logic: symptom → cause → fix)
- Formal proofs (Lean, Coq, Isabelle)
- 44K+ SFT examples with Socratic dialogues, tool use, and sycophancy resistance

## Specs

| Spec | Value |
|---|---|
| Parameters | 1.1B (d36, 2304 dim, 18 heads, 3 KV heads) |
| Context | 64K native, 128K best-effort |
| Memory (Q4 + 128K) | 2.3 GB (fits on a phone) |
| Architecture | GQA 6:1, sliding window, FA3, Smear+Backout, MuonAdamW |
| Training | 2x H200 SXM, 6 epochs, ~$153 |
| Post-training | SAGE-GRPO + PRM + PoSE context extension |
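The 2.3 GB figure can be sanity-checked with a back-of-envelope calculation. A sketch, assuming "d36" means 36 transformer layers, head dim = 2304 / 18 = 128, and that both weights and KV cache are stored at 4 bits (these are reasonable readings of the spec line, not confirmed by the card):

```python
# Back-of-envelope memory estimate for the "2.3 GB at Q4 + 128K" claim.
# Assumed from the spec line (not confirmed): 36 layers, head_dim = 128,
# 4-bit weights AND a 4-bit KV cache.
PARAMS = 1.1e9
LAYERS = 36             # assumed from "d36"
KV_HEADS = 3            # GQA: 18 query heads share 3 KV heads (6:1)
HEAD_DIM = 2304 // 18   # 128
SEQ_LEN = 128 * 1024    # 128K best-effort context
BYTES_Q4 = 0.5          # 4-bit quantization

weights_gb = PARAMS * BYTES_Q4 / 1e9
# K and V tensors, per layer, per KV head, per position
kv_cache_gb = 2 * LAYERS * KV_HEADS * HEAD_DIM * SEQ_LEN * BYTES_Q4 / 1e9

print(f"weights  ≈ {weights_gb:.2f} GB")
print(f"KV cache ≈ {kv_cache_gb:.2f} GB")
print(f"total    ≈ {weights_gb + kv_cache_gb:.2f} GB")
```

Under those assumptions the total lands at roughly 2.4 GB, consistent with the card's 2.3 GB figure; without KV-cache quantization, an fp16 cache alone would exceed 7 GB at 128K, so the 6:1 GQA ratio and cache quantization are what make phone deployment plausible.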

## Anti-Sycophantic by Design

Most models agree with you. Odigos doesn't. Trained from scratch on:

- Socratic dialogues that expose flawed reasoning through questioning
- Cognitive bias literature (Kahneman, Taleb, Sowell)
- Adversarial reasoning exercises
- Explicit sycophancy resistance training data

When it says your reasoning is sound, you can believe it.

## Product Family

| Model | Params | Context | Capability |
|---|---|---|---|
| Tiny (this) | 1.1B | 64K/128K | Uses tools. Edge deployment. |
| Small | 3.2B | 128K | Writes Python tools. |
| Medium | 5.5B | 128K+ | Extended reasoning + multi-language. |

Each model builds on the previous via *Grow, Don't Overwrite*. Full family training cost: ~$536.

## Citation

```bibtex
@misc{odigos-tiny-reasoner-2026,
  title={Odigos: Reasoning Models That Think Honestly},
  author={Jacob Tamler Carter},
  year={2026},
  url={https://huggingface.co/fleshapron/odigos-tiny-reasoner}
}
```

*Odigos*: Greek for "guide." Born April 1, 2026.