Axiom-560M

A Governed Language Model: every output ships its own proof of governance.

Axiom-560M is a dual-mode decoder (conversational + semiconductor) trained on 56,000 governed pairs. Governance isn't a filter; it's the architecture.

Model Details

Architecture: BLOOM-560M (decoder-only transformer)
Parameters: 559M
Training data: 56,000 governed pairs (conversational + semiconductor RTL)
Eval loss: 0.1635
Perplexity: 1.18 overall (1.16 conversational, 1.64 semiconductor)
License: MIT
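The perplexity figure follows directly from the eval loss above (perplexity = e^loss), which is easy to verify:

```python
import math

# Perplexity is the exponential of the eval loss reported above.
eval_loss = 0.1635
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # 1.18
```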

Modes

Conversational: governed dialogue (perplexity 1.16)

Semiconductor: governed RTL and hardware specifications (perplexity 1.64)

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("MetaCortex-Dynamics/Axiom-560M")
tokenizer = AutoTokenizer.from_pretrained("MetaCortex-Dynamics/Axiom-560M")

# The <|conv|> prefix selects the conversational mode.
input_ids = tokenizer.encode("<|conv|>What is governed generation?", return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
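Mode selection happens through the prompt prefix. The card documents the <|conv|> tag for conversational mode; the tag for semiconductor mode is not stated here, so the <|semi|> name below is an assumption used purely for illustration:

```python
# Sketch of the dual-mode prompt convention. "<|conv|>" is documented in
# the usage example above; "<|semi|>" is an ASSUMED tag for the
# semiconductor mode, not confirmed by this card.
MODE_TAGS = {"conversational": "<|conv|>", "semiconductor": "<|semi|>"}

def format_prompt(mode: str, text: str) -> str:
    """Prefix the prompt with the tag for the requested mode."""
    return MODE_TAGS[mode] + text

print(format_prompt("conversational", "What is governed generation?"))
# <|conv|>What is governed generation?
```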

Governance

Every output passes through a four-phase governance pipeline:

PROPOSE → DECIDE → PROMOTE → EXECUTE
  • 15 grounding operators as token vocabulary
  • 7 interrogative witnesses as grammar
  • Admissibility gates (G₁-G₇) with three-valued semantics
  • Machine-verifiable governance trace on every output
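The shape of such a pipeline can be sketched in a few lines. This is an illustrative sketch only: the gate logic, the aggregation rule, and the trace format below are assumptions, not the actual Axiom-560M implementation; only the phase names, the gate count (G₁-G₇), and the three-valued semantics come from the card.

```python
from enum import Enum

class Verdict(Enum):
    """Three-valued gate semantics (assumed Kleene-style)."""
    ADMIT = "admit"
    REJECT = "reject"
    UNKNOWN = "unknown"

def decide(gate_verdicts):
    """Aggregate gate verdicts: any REJECT rejects, any remaining
    UNKNOWN defers, otherwise the candidate is admitted."""
    if Verdict.REJECT in gate_verdicts:
        return Verdict.REJECT
    if Verdict.UNKNOWN in gate_verdicts:
        return Verdict.UNKNOWN
    return Verdict.ADMIT

def govern(candidate, gates):
    """PROPOSE a candidate, DECIDE via the gates, PROMOTE/EXECUTE only
    on admit, and always emit a machine-readable governance trace."""
    verdicts = [gate(candidate) for gate in gates]
    verdict = decide(verdicts)
    trace = {
        "candidate": candidate,
        "gates": [v.value for v in verdicts],
        "verdict": verdict.value,
    }
    executed = candidate if verdict is Verdict.ADMIT else None
    return executed, trace

# Seven toy gates (standing in for G1..G7) that admit non-empty output.
gates = [lambda c: Verdict.ADMIT if c else Verdict.REJECT] * 7
output, trace = govern("hello", gates)
print(trace["verdict"])  # admit
```

The point of the structure is that the trace is produced unconditionally, so a rejected or deferred output still carries its own verifiable record of why.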

Organization

MetaCortex Dynamics DAO

Weights: Safetensors, 0.6B params, BF16