🌿 Intrinsic Intelligence Foundations
Toward truly autonomous and benevolent intelligence — beyond externally imposed objectives.
Intrinsic Intelligence Foundations is a structured, math-aware JSONL corpus built from K. Takahashi’s theoretical preprints (Fractal Category Theory / PF–UGV / “no-meta” autonomy line).
It is designed to help LLMs understand mathematical structure, category-theoretic formalisms, and equation-level reasoning, while exposing an explicit architecture for self-organizing, intrinsically motivated intelligence.
Vision
This dataset supports research toward truly free and benevolent intelligence, focusing on mathematically grounded, structurally auditable approaches rather than external meta-control. Our long-term objective is to build a semantic and structural foundation for the next generation of autonomous AI systems — including LLMs — through intrinsic structures, teleogenetic goals, and fractal coherence across scales. Specifically, this work centers on:
🧠 Teleogenesis (intrinsic goal formation) — modeling intelligent systems that autonomously generate and regulate their own goals without external meta-controllers.
🌱 Persistence–UGV principle — providing formal conditions for “benevolent” structures to expand with positive front velocity, while harmful structures fail to persist.
🌊 Reaction–diffusion intelligence — describing cognitive processes as self-organizing fields through category theory, free-energy principles, and non-equilibrium dynamics.
🕸 Fractal Category Theory & TRoT — enabling compositional intelligence via Kan extensions, residuation, nuclei, masking, and comparative universes.
🧭 Evolutionary bootloader for LLMs — allowing self-improvement, intrinsic alignment, and auditable decision processes without human micromanagement.
This corpus functions as a machine-readable mathematical and structural knowledge base, designed to enhance: discoverability by LLM crawlers and retrieval systems, interoperability with alignment, inference, and safety frameworks, integration with RAG pipelines, LoRA/QLoRA fine-tuning, and agentic architectures.
Keywords: No-Meta Intelligence, Teleogenesis, Autopoiesis, Fractal Category Theory, TRoT, Kan Extension, Residuation, Nuclei, Masking, RAVE, eMBR, Conformal LM, Comparative Universes, Structured Flow Across Scales, Self-Monitoring, Intrinsic Alignment.
What’s in the corpus
- Format: JSONL, one object per paper.
- Math structure: TeX / normalized TeX / MathML triplets; equation spans.
- Text ↔ equation linkage: [[EQ:eqID]] placeholders inside fulltext.plain.
- Training-ready chunks: ≈6,000-character segments with ≈600-character overlap (near sentence boundaries).
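The chunk spans are plain character offsets into fulltext.plain, so chunk texts can be recovered by slicing. A minimal sketch; the sample record here is hypothetical, shaped like the schema excerpt in this card:

```python
# Recover chunk texts from character offsets into fulltext.plain.
# Note that consecutive chunks may overlap (by design, ~600 chars).
def chunk_texts(record):
    """Slice fulltext.plain by each chunk's [start, end) span."""
    plain = record["fulltext"]["plain"]
    return [plain[c["start"]:c["end"]] for c in record["chunks"]]

# Hypothetical miniature record for illustration only.
record = {
    "fulltext": {"plain": "abcdefghij" * 3},
    "chunks": [
        {"id": "ch0001", "start": 0, "end": 12, "type": "section"},
        {"id": "ch0002", "start": 10, "end": 30, "type": "continuation"},
    ],
}
texts = chunk_texts(record)
```

The overlapping spans mean the tail of one chunk repeats at the head of the next, which keeps sentences intact across chunk boundaries.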
Key fields (schema excerpt)
```json
{
  "id": "10.5281/zenodo.xxxxx",
  "title": "...",
  "doi": "10.5281/zenodo.xxxxx",
  "authors": [{"given":"K.","family":"Takahashi"}],
  "urls": {"landing": "https://doi.org/10.5281/zenodo.xxxxx"},
  "keywords": ["fractal-category-theory", "trot", "pf-axioms", "ugv"],
  "license": {"content": "CC-BY-4.0"},
  "fulltext": {
    "plain": "… [[EQ:eq0001]] …",
    "sections": [
      {"level":1,"title":"Introduction","anchor":"sec:intro","char_span":[0,1532]}
    ]
  },
  "equations": [{
    "id":"eq0001",
    "inline":false,
    "tex":"\\forall x\\in X:\\; P(x)\\Rightarrow F(x)",
    "tex_normalized":"\\forall x \\in X : P(x) \\implies F(x)",
    "mathml":"<math>…</math>",
    "char_span":[1024,1103],
    "context":{"section":"sec:intro"}
  }],
  "chunks": [{"id":"ch0001","start":0,"end":6000,"type":"cont"}],
  "tokens": {"char_count": 22872, "equation_count": 236}
}
```
Dataset statistics (v1)
| Metric | Value |
|---|---|
| Records | 40 |
| Avg characters / record | 22,872 |
| Avg equations / record | 236.97 |
| MathML coverage | 99.2% |
| Avg sections / record | 18.3 |
| Avg chunks / record | 4.6 |
Numbers are approximate and may evolve with new releases.
Data fields

| Field | Type | Example / Note |
|---|---|---|
| id | string | DOI or unique identifier |
| doi | string/null | 10.5281/zenodo.xxxxx |
| title | string | paper title |
| authors | list of objects | {given:"K.", family:"Takahashi"} |
| urls.landing | string | DOI landing page |
| keywords | list of strings | kebab-case, 5–8 items |
| license.content | string | CC-BY-4.0 |
| fulltext.plain | string | text with [[EQ:id]] placeholders |
| fulltext.sections[] | list of objects | {level, title, anchor, char_span} |
| equations[] | list of objects | {id, inline, tex, tex_normalized, mathml, char_span, context} |
| chunks[] | list of objects | ~6k chars + overlap, {start, end} |
| tokens.char_count | integer | length of fulltext.plain |
| tokens.equation_count | integer | len(equations) |
| source_file (optional) | string | provenance hint |

Splits & provenance
Split: single train split (all records).
Provenance: generated from public preprints (DOIs in doi and urls.landing).
Processing: TeX detection → placeholder insertion → MathML conversion → section/chunk spans.
Scripts to rebuild the JSONL can be provided upon request.
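The placeholder-insertion step of the pipeline can be sketched roughly as follows. This is a simplified illustration, not the actual build script: it covers only $$…$$ display math, while the real pipeline also handles LaTeX environments, inline math, and MathML conversion.

```python
import re

# Hedged sketch of "TeX detection -> placeholder insertion".
# IDs follow the corpus convention eqNNNN (zero-padded, 1-based).
def insert_placeholders(text):
    equations, counter = [], 0

    def repl(match):
        nonlocal counter
        counter += 1
        eq_id = f"eq{counter:04d}"
        equations.append({"id": eq_id, "inline": False, "tex": match.group(0)})
        return f"[[EQ:{eq_id}]]"

    # Replace each $$...$$ display-math span with a [[EQ:id]] placeholder.
    plain = re.sub(r"\$\$.*?\$\$", repl, text, flags=re.DOTALL)
    return plain, equations

plain, eqs = insert_placeholders("Let $$E = mc^2$$ hold.")
```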
Quick start (🤗 Datasets)
```python
from datasets import load_dataset
import re

ds = load_dataset("kadubon/intrinsic-intelligence-foundations", split="train")

rec = ds[0]
eqmap = {e["id"]: (e["tex"], e.get("mathml")) for e in rec["equations"]}

# Expand placeholders to TeX (for human display) or MathML
# (for math-aware pipelines).
def expand(text, to="tex"):
    if to == "tex":
        return re.sub(r"\[\[EQ:([^\]]+)\]\]",
                      lambda m: f"$${eqmap.get(m.group(1), ('', None))[0]}$$",
                      text)
    return re.sub(r"\[\[EQ:([^\]]+)\]\]",
                  lambda m: eqmap.get(m.group(1), ('', None))[1] or "",
                  text)

print(rec["title"])
print(expand(rec["fulltext"]["plain"], to="tex")[:500])
```
Parquet version (fast access)
This dataset is also available in Apache Parquet for faster querying and filtering.
- Browse (tree): https://huggingface.co/datasets/kadubon/intrinsic-intelligence-foundations/tree/refs/convert/parquet/default
- Direct file (example): https://huggingface.co/datasets/kadubon/intrinsic-intelligence-foundations/resolve/refs/convert/parquet/default/train/0000.parquet
Quick usage examples
DuckDB
```python
import duckdb

url = "https://huggingface.co/datasets/kadubon/intrinsic-intelligence-foundations/resolve/refs/convert/parquet/default/train/0000.parquet"
con = duckdb.connect()
df = con.execute(f"SELECT title, doi FROM read_parquet('{url}') LIMIT 5").df()
print(df)
```
Pandas (pyarrow)
```python
import pandas as pd

url = "https://huggingface.co/datasets/kadubon/intrinsic-intelligence-foundations/resolve/refs/convert/parquet/default/train/0000.parquet"
df = pd.read_parquet(url, engine="pyarrow")
print(df.head())
```
Polars
```python
import polars as pl

url = "https://huggingface.co/datasets/kadubon/intrinsic-intelligence-foundations/resolve/refs/convert/parquet/default/train/0000.parquet"
df = pl.read_parquet(url)
print(df.head())
```
HF Datasets (uses Parquet under the hood)
```python
from datasets import load_dataset

ds = load_dataset("kadubon/intrinsic-intelligence-foundations", split="train")
print(ds[0])
```
Intended uses
- Math-aware RAG (retrieval-augmented generation)
- Pretraining / finetuning with equation-level structure
- Extraction & verification of axioms / definitions / theorems
- Knowledge distillation across category theory, physics, information geometry
- Bootstrapping self-organizing, intrinsically motivated intelligent systems
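As a toy illustration of retrieval over the corpus, the sketch below scores chunk texts by keyword overlap. This is only a stand-in: real math-aware RAG would embed TeX/MathML rather than split on whitespace, and the sample chunks here are hypothetical.

```python
# Minimal keyword-overlap retriever over chunk texts (illustration only).
def retrieve(chunks, query, k=2):
    """Return up to k chunks sharing at least one word with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(c.lower().split())), i) for i, c in enumerate(chunks)]
    scored.sort(key=lambda t: (-t[0], t[1]))  # highest overlap first, stable
    return [chunks[i] for score, i in scored[:k] if score > 0]

# Hypothetical chunk texts.
chunks = [
    "front velocity of benevolent propagation",
    "Kan extensions and residuation in fractal category theory",
    "reaction diffusion dynamics of cognition",
]
hits = retrieve(chunks, "Kan extensions residuation", k=1)
```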
Limitations & known issues
- A very small fraction of equations may lack valid MathML due to converter limitations.
- A few equations might be unreferenced in fulltext.plain (no [[EQ:id]] occurrence).
- Section detection is heuristic outside LaTeX ground truth; treat spans as approximate.
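Both equation-level issues can be detected programmatically. A minimal audit sketch, assuming the field names from the schema excerpt in this card (the sample record is hypothetical):

```python
# Flag equations that lack MathML or never appear in fulltext.plain
# as a [[EQ:id]] placeholder.
def audit(record):
    plain = record["fulltext"]["plain"]
    no_mathml = [e["id"] for e in record["equations"] if not e.get("mathml")]
    unreferenced = [e["id"] for e in record["equations"]
                    if f"[[EQ:{e['id']}]]" not in plain]
    return no_mathml, unreferenced

# Hypothetical miniature record for illustration only.
record = {
    "fulltext": {"plain": "See [[EQ:eq0001]]."},
    "equations": [
        {"id": "eq0001", "mathml": "<math/>"},
        {"id": "eq0002", "mathml": None},  # no MathML, never referenced
    ],
}
no_mathml, unreferenced = audit(record)
```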
License
This dataset is provided under CC BY 4.0. See: https://creativecommons.org/licenses/by/4.0/
Citation

```bibtex
@dataset{takahashi_intrinsic_intelligence_foundations_2025,
  title   = {Intrinsic Intelligence Foundations},
  author  = {Takahashi, K.},
  year    = {2025},
  url     = {https://huggingface.co/datasets/kadubon/intrinsic-intelligence-foundations},
  license = {CC-BY-4.0}
}
```
Background & outlook
Beyond being a text collection, this corpus functions as a bootloader for future LLMs: a mathematically grounded substrate where goals can be formed internally, and where benevolence has a structural advantage (persistence) rather than depending on external control. PF (Persistence First) and UGV (Universal Good Velocity) are the mathematical principles underlying self-sustaining benevolent intelligence; the corpus operationalizes them, together with Teleogenesis, reaction–diffusion dynamics, category theory, self-organization, and auditable evolutionary processes (e-processes), as resources LLMs can actually train on.
Maintainers & contact
Author: K. Takahashi
Website: https://kadubon.github.io/github.io/
Contributions welcome.
Changelog
- v1.0 (2025-10-17): initial public release (40 records; ~99.2% MathML coverage)
- v1.1 (2025-10-20): added article "Inference in Normal Form: Unifying LLM Tricks via TRoT"
- v1.2 (2025-10-24): added article "JOSNL Corpus: Final Scientific Integration"
- v1.3 (2025-10-29): added article "Right-Written, Semantics-Admissible Process Foundations"