---
license: mit
language:
- en
tags:
- governed-language-model
- semiconductor
- conversational
- governance
pipeline_tag: text-generation
---

# Axiom-560M

**A Governed Language Model — every output ships its own proof of governance.**

Axiom-560M is a dual-mode decoder (conversational + semiconductor) trained on 56,000 governed pairs. Governance isn't a filter — it's the architecture.

## Model Details

| | |
|---|---|
| Architecture | BLOOM-560M (decoder-only transformer) |
| Parameters | 559M |
| Training data | 56,000 governed pairs (conversational + semiconductor RTL) |
| Eval loss | 0.1635 |
| Perplexity | 1.18 overall (1.16 conversational, 1.64 semiconductor) |
| License | MIT |

## Modes

- **Conversational** — governed dialogue (perplexity 1.16)
- **Semiconductor** — governed RTL and hardware specifications (perplexity 1.64)

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("MetaCortex-Dynamics/Axiom-560M")
tokenizer = AutoTokenizer.from_pretrained("MetaCortex-Dynamics/Axiom-560M")

input_ids = tokenizer.encode("<|conv|>What is governed generation?", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Governance

Every output passes through a four-phase governance pipeline:

```
PROPOSE → DECIDE → PROMOTE → EXECUTE
```

- 15 grounding operators as token vocabulary
- 7 interrogative witnesses as grammar
- Admissibility gates (G₁-G₇) with three-valued semantics
- Machine-verifiable governance trace on every output

## Links

- [Interactive Demo](https://huggingface.co/spaces/MetaCortex-Dynamics/Axiom-Ref) — try Axiom in your browser
- [Source Code](https://github.com/MetaCortex-Dynamics/Axiom) — MIT license
- [Benchmark Results](https://github.com/MetaCortex-Dynamics/Axiom/blob/main/BENCHMARKS.md) — 100% governance vs 0% for all other LLMs

## Organization

[MetaCortex Dynamics DAO](https://github.com/MetaCortex-Dynamics)
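## Appendix: Pipeline Sketch

The four-phase governance pipeline and the three-valued gate semantics described above can be sketched in plain Python. This is an illustrative model only, under stated assumptions: the `Verdict` enum, `govern` function, `Trace` class, and toy gates are hypothetical names invented for this sketch, not Axiom's actual implementation.

```python
# Hypothetical sketch of PROPOSE -> DECIDE -> PROMOTE -> EXECUTE.
# Gate names (G1..G7) and the three verdict values come from the card above;
# all data structures and logic here are illustrative, not Axiom's real code.
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    """Three-valued semantics for an admissibility gate."""
    ADMIT = "admit"
    REJECT = "reject"
    UNDETERMINED = "undetermined"


@dataclass
class Trace:
    """Machine-verifiable governance trace: one record per phase."""
    phases: list = field(default_factory=list)

    def record(self, phase, detail):
        self.phases.append((phase, detail))


def run_gates(candidate, gates):
    """Evaluate all gates; any REJECT blocks, any UNDETERMINED abstains."""
    verdicts = {name: gate(candidate) for name, gate in gates.items()}
    if any(v is Verdict.REJECT for v in verdicts.values()):
        return Verdict.REJECT, verdicts
    if any(v is Verdict.UNDETERMINED for v in verdicts.values()):
        return Verdict.UNDETERMINED, verdicts
    return Verdict.ADMIT, verdicts


def govern(candidate, gates):
    """Run one candidate output through the four phases, logging a trace."""
    trace = Trace()
    trace.record("PROPOSE", candidate)
    decision, verdicts = run_gates(candidate, gates)
    trace.record("DECIDE", verdicts)
    if decision is Verdict.ADMIT:
        trace.record("PROMOTE", candidate)
        trace.record("EXECUTE", candidate)
        return candidate, trace
    return None, trace  # blocked outputs still ship their trace


# Two toy gates standing in for G1-G7: G1 rejects empty text,
# G2 abstains (UNDETERMINED) on very short text.
gates = {
    "G1": lambda t: Verdict.ADMIT if t.strip() else Verdict.REJECT,
    "G2": lambda t: Verdict.ADMIT if len(t) > 3 else Verdict.UNDETERMINED,
}
out, trace = govern("governed output", gates)
```

The key design point this sketch tries to capture is that the trace is produced whether or not the output is admitted, so every result carries its own proof of governance.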