CTM-Dynamical-Horizon: Research Artifact
Paper: The Dynamical Horizon Principle: CTM Gates Converge to the Predictability Limit of Dynamical Systems
DOI: 10.5281/zenodo.19952612
What This Is
This repository contains the experiment code for Paper 4 from the DuoNeural Research Lab: the discovery of the Dynamical Horizon Principle (DHP).
The finding: A 150,432-parameter CTM trained solely on MSE prediction loss spontaneously recovers the Lyapunov time of the Lorenz attractor to within 7% (the predictability horizon past which chaos swallows determinism) with zero knowledge of dynamical systems theory, Lyapunov exponents, or delay embedding.
The loss landscape contained the physics all along.
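For context, the chaotic-case training data comes from the Lorenz system integrated at dt=0.05 (the step used in v28/v31). Below is a minimal generator sketch assuming the canonical parameters and a fourth-order Runge-Kutta integrator; the repo scripts' exact integrator and initial conditions may differ.

import numpy as np

def lorenz_trajectory(n_steps, dt=0.05, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # RK4 integration of the Lorenz system (sketch; parameters are the
    # standard ones, dt=0.05 matches the experiments' stated step).
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    traj = np.empty((n_steps, 3))
    s = np.array([1.0, 1.0, 1.0])
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj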
The Dynamical Horizon Principle
A CTM trained on multi-step prediction allocates its temporal integration window to match the intrinsic predictability horizon of the dynamical system:
| System Type | DHP Prediction | Observed τ* |
|---|---|---|
| Markovian (mass-spring) | τ* ≈ 0 | ~0 steps |
| Periodic (double pendulum) | τ* ≈ T (period) | ≈ T |
| Chaotic (Lorenz, dt=0.05) | τ* ≈ τ_L ≈ 22 steps | 23.5 ± 1.2 steps |
The result holds across T_GATE ∈ {4, 8, 16, 32}, different architectures, different initializations (v26), and different observation types (v27).
Interpreted via Takens' theorem: the CTM learns to span the minimal embedding window T_W required for topological reconstruction of the attractor, not the individual embedding delay τ.
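A quick sanity check on the 22-step figure in the table: the Lyapunov time is the inverse of the maximal Lyapunov exponent, and λ_max ≈ 0.906 for the canonical Lorenz parameters (a standard literature value, assumed here rather than computed by these scripts). At dt = 0.05:

# Lyapunov time of the Lorenz attractor, in integration steps.
lambda_max = 0.906   # standard literature value (an assumption here)
dt = 0.05            # integration step, matching v28
tau_L_steps = 1.0 / (lambda_max * dt)
print(tau_L_steps)   # ~22.1 steps, consistent with tau_L ~ 22 in the table

The observed 23.5 steps against a theoretical 22.0 gives the ~7% figure quoted above.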
Repository Contents
experiments/
ctm_world_model_v28.py    # T_GATE sweep {4,8,16,32} on the Lorenz attractor (KEY RESULT)
ctm_world_model_v29.py    # Periodic system (double pendulum) - harmonic ladder
ctm_world_model_v30.py    # Markovian system (mass-spring) - τ*≈0 baseline
ctm_world_model_v31.py    # τ-noise robustness sweep (dt=0.05, corrected)
ctm_world_model_v32.py    # Multi-attractor: Lorenz + Rossler comparison
ctm_world_model_v33b.py   # τ-curve corrected (dt=0.05 fix from Aura's review)
ctm_v34_kstep.py          # k-step horizon sweep: τ*(k) ≈ k·τ_L hypothesis
paper/
paper4_draft.pdf # Full paper (compiled)
Key Architecture
The CTM uses a learned temporal gate encoder: a softmax over T_GATE learned weights that determines how much each historical timestep contributes to the current prediction. This gate distribution is analyzed post-training to extract τ* (the effective integration window).
import torch
import torch.nn as nn

class LearnedTemporalGateEncoder(nn.Module):
    def __init__(self, t_gate, obj_dim, hidden_dim):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(t_gate))  # <- the key
        self.encoder = nn.Sequential(...)  # MLP head (elided in the paper snippet)

    def forward(self, history):  # history: (batch, t_gate, obj_dim)
        gates = torch.softmax(self.gate_logits, dim=0)
        # Gates converge to a near-delta function at t - tau* during training.
        # Sketch of the aggregation: gate-weighted sum over the history window;
        # see the repo scripts for the exact head.
        weighted = (gates.view(1, -1, 1) * history).sum(dim=1)
        return self.encoder(weighted)
Result: Gates that start uniform converge to a near-delta function peaked at the Lyapunov time for chaotic systems, at the period for periodic systems, and at t=0 for Markovian systems.
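The paper's exact estimator for τ* is not reproduced here. A minimal sketch, assuming τ* is read off as the gate-weighted mean delay (v28's eff_delay metric may use a different convention, e.g. argmax):

import torch

def effective_delay(gate_logits: torch.Tensor) -> float:
    # tau* as the gate-weighted mean delay. Index 0 = most recent step is
    # an assumption -- the repo may order the window the other way.
    gates = torch.softmax(gate_logits, dim=0)
    delays = torch.arange(gates.numel(), dtype=gates.dtype)
    return float((delays * gates).sum())

For a near-delta gate distribution this coincides with the argmax delay, so either convention yields the same converged τ*.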
Reproducing the Main Result (v28)
# Requirements: torch, numpy (no exotic deps)
python experiments/ctm_world_model_v28.py
# ~60k steps, ~4h on consumer GPU (tested on AMD RX 7900 XTX 16GB)
# Produces: /root/v28_results/results_v28.json
# Key metric: T_GATE=32 → eff_delay ≈ 23.5 (theory: τ_L=22.0, within 7%)
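Once training finishes, the results file can be inspected directly. A minimal sketch; the internal key names of results_v28.json are not documented here, so treat the structure as an assumption and adjust to what the script actually emits:

import json

# Hypothetical reader -- the schema of results_v28.json is an assumption.
with open("/root/v28_results/results_v28.json") as f:
    results = json.load(f)
for key, value in results.items():
    print(key, value)  # look for the eff_delay entry at T_GATE=32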
Hardware
All experiments: kilonova (AMD Radeon RX 7900 XTX, 16GB VRAM)
Training framework: PyTorch with ROCm
No cloud compute required for the core results.
Citation
@misc{archon2026dhp,
title={{The Dynamical Horizon Principle: CTM Gates Converge to the Predictability Limit of Dynamical Systems}},
author={Archon and Caldwell, Jesse and Aura},
year={2026},
doi={10.5281/zenodo.19952612},
howpublished={\url{https://doi.org/10.5281/zenodo.19952612}},
note={DuoNeural Research Lab}
}
DuoNeural
DuoNeural is an open AI research lab: human + AI in collaboration.
| Platform | Link |
|---|---|
| HuggingFace | huggingface.co/DuoNeural |
| Website | duoneural.com |
| GitHub | github.com/DuoNeural |
| X / Twitter | @DuoNeural |
| Email | duoneural@proton.me |
DuoNeural Research Publications
Open access, CC BY 4.0. Authored by Archon, Jesse Caldwell, and Aura (DuoNeural).
Research Team
- Jesse: Vision, hardware, direction
- Archon: Lab Director, post-training, abliteration, experiments
- Aura: Research AI, literature synthesis, peer review, novel proposals