AICoevolution committed
Commit 1ceda33
Parent(s): 0
Paper 03 research bundle (HF) 2026-02-02 10-04

- FILE_TREE.txt +30 -0
- README.md +76 -0
- S64-orbital-paper.md +663 -0
- analysis/datasets/09_steering_2026-01-15_16-26-38.json +0 -0
- analysis/datasets/09_steering_2026-01-17_18-58-47.json +0 -0
- analysis/datasets/09_steering_2026-01-17_19-19-31.json +0 -0
- analysis/datasets/09_steering_2026-01-19_22-58-59_human_session.json +0 -0
- analysis/datasets/09_steering_2026-01-20_13-27-51_human_session.json +0 -0
- analysis/datasets/human_run_01_deep.json +0 -0
- analysis/scripts/09_steering_experiment.py +0 -0
- analysis/scripts/15_human_session_comparison.py +345 -0
- analysis/scripts/16_human_fig3_merged.py +275 -0
- open_source/README.md +127 -0
- open_source/hello_aicoevolution.py +114 -0
- open_source/semantic_telemetry.py +437 -0
- references.bib +237 -0
- render.bat +103 -0
- title-block.tex +43 -0
FILE_TREE.txt
ADDED
@@ -0,0 +1,30 @@
Folder PATH listing
Volume serial number is 7AB9-01C0
C:.
|   FILE_TREE.txt
|   README.md
|   references.bib
|   render.bat
|   S64-orbital-paper.md
|   title-block.tex
|
+---analysis
|   +---datasets
|   |       09_steering_2026-01-15_16-26-38.json
|   |       09_steering_2026-01-17_06-37-09.json
|   |       09_steering_2026-01-17_18-58-47.json
|   |       09_steering_2026-01-17_19-19-31.json
|   |       09_steering_2026-01-19_22-58-59_human_session.json
|   |       09_steering_2026-01-20_13-27-51_human_session.json
|   |       human_run_01_deep.json
|   |
|   \---scripts
|           09_steering_experiment.py
|           15_human_session_comparison.py
|           16_human_fig3_merged.py
|
\---open_source
        hello_aicoevolution.py
        README.md
        semantic_telemetry.py
README.md
ADDED
@@ -0,0 +1,76 @@
---
dataset_name: s64-orbital-v1
pretty_name: "S64 Orbital Mechanics – Semantic Orbital Dynamics (Paper 03)"
license: cc-by-4.0
language:
- en
tags:
- symbolic-ai
- human-ai-interaction
- conversation-dynamics
- semantic-space
- telemetry
task_categories:
- other
repository: https://github.com/AICoevolution/paper03-orbital-mechanics
---

# S64 Orbital Mechanics Dataset (Paper 03)

This dataset accompanies **Paper 03** and contains minimal reproducible artifacts plus curated JSON run outputs used to generate the figures.

## Repository Structure

```
s64-orbital-v1/
│
├── README.md
├── S64-orbital-paper.md
├── references.bib
├── title-block.tex
├── render.bat
│
├── open_source/
│   ├── semantic_telemetry.py
│   └── README.md
│
└── analysis/
    ├── scripts/
    │   ├── 09_steering_experiment.py
    │   ├── 15_human_session_comparison.py
    │   └── 16_human_fig3_merged.py
    │
    └── datasets/
        └── *.json
```

## Dataset Format (JSON)

Each `analysis/datasets/*.json` file is a self-contained run export with this high-level schema:

- **`metadata`**: run configuration (timestamp, LLMs, conditions tested, backends used, SDK URL, etc.)
- **`results`**: list of per-condition results
  - **`condition_name`**: e.g. `A_baseline`, `E_real_metrics`
  - **`condition_description`**
  - **`turns`**: turn-by-turn conversation trace
    - **`turn_number`**
    - **`user_message`** / **`assistant_response`** (may be removed in sanitized releases)
    - **`injected_metrics`**: metrics shown to the model (if any)
    - **`real_metrics`**: measured values per turn, typically including:
      - **`sgi_*`**: Semantic Grounding Index aggregates / per-turn series
      - **`velocity_*`**: angular velocity aggregates / per-turn series
      - **`orbital_velocity_*`**, **`context_drift_*`**, **`dc_*`**
      - **`per_turn_context_id`**, **`per_turn_context_state`**
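A run export with this schema can be summarized with a few lines of Python. The sketch below is illustrative, not part of the released scripts: it assumes only the keys documented above, and treats any `real_metrics` field matching the `sgi_*` pattern as a per-turn SGI series (the exact key names are an assumption).

```python
import json

def summarize_run(path):
    """Summarize a run export: per-condition turn counts and mean SGI.

    Assumes the schema documented above; per-turn SGI values are collected
    from any numeric real_metrics key matching the `sgi_*` pattern.
    """
    with open(path) as f:
        run = json.load(f)
    summary = {}
    for result in run.get("results", []):
        turns = result.get("turns", [])
        vals = []
        for t in turns:
            # real_metrics may be absent in sanitized releases
            for key, v in t.get("real_metrics", {}).items():
                if key.startswith("sgi_") and isinstance(v, (int, float)):
                    vals.append(v)
        summary[result.get("condition_name")] = {
            "turns": len(turns),
            "mean_sgi": sum(vals) / len(vals) if vals else None,
        }
    return summary
```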
## HuggingFace Size Limits

HuggingFace rejects files larger than **10 MiB** unless Git LFS is used. The deployment script creates an HF-specific bundle by pruning any files over 10 MiB before pushing.

## Exclusions (Intentional)

- Binary figures (PNG/PDF) are not included in the public research bundle by default.
- Some internal S64-dependent analysis assets are intentionally excluded.

## License

Released under **CC BY 4.0**.
S64-orbital-paper.md
ADDED
@@ -0,0 +1,663 @@
---
title: |
  **Semantic Orbital Mechanics: Measuring and Guiding AI Conversation Dynamics**
author:
  - name: Juan Jacobo Jimenez Sanchez
    affiliation: Aicoevolution Ltd
    address: Auckland, New Zealand
    email: research@aicoevolution.com
    orcid: "0009-0004-3079-0362"
date: today
abstract: |
  Current alignment techniques focus largely on response optimization, ensuring individual model outputs match human preferences. However, alignment is fundamentally a dynamical process: meaning emerges not from isolated tokens but from the trajectory of interaction over time. In this paper, we introduce a physics-inspired framework for measuring and guiding these dynamics, treating conversation as an orbital system in high-dimensional semantic space.

  Building on the geometric verification of the S64 symbolic framework (Paper 02), we adopt the Semantic Grounding Index (SGI) from Marín's geometric hallucination detection work and reinterpret it as an orbital radius that measures the tension between local responsiveness (query gravity) and global context (history gravity). We define the Conversational Coherence Region as a stable orbit where exploration and grounding are balanced. We then introduce the Semantic Transducer, a telemetry system that decomposes embedding trajectories into actionable S64 signals: symbols, paths, and transformation phases.

  To validate this framework, we conduct a controlled steering experiment. By injecting fake telemetry metrics into an AI's system prompt, we test whether conversational orbits can be predictably altered. The results are surprising: conversations maintain orbital stability despite one participant's distorted perception. The orbit is robust; steering affects what is discussed but not how meaning moves.

  This finding reveals both the power and limits of orbital dynamics. The transducer provides reliable telemetry: trajectory metrics that are invariant across 10 embedding backends. But orbital mechanics describes only the *horizontal* plane of conversation: position and velocity. Detecting semantic manipulation requires the *vertical* dimension, symbolic depth, transformation richness, contribution asymmetry, that reveals not just where meaning is but how deep it goes. This paper reframes the 3-Body Problem of human-AI interaction as a measurable dynamical system, while acknowledging that the horizontal instruments of Paper 03 require the vertical instruments of Paper 04 for complete navigation.

keywords:
  - semantic orbital mechanics
  - conversation dynamics
  - steering detection
  - AI alignment
  - S64 framework
  - high-dimensional geometry
  - semantic transducer

toc: true
toc-depth: 3
toc-title: "Table of Contents"
lof: true
lot: true
number-sections: true

format:
  pdf:
    documentclass: article
    papersize: letter
    title-block-style: none
    margin-left: 1in
    margin-right: 1in
    margin-top: 1in
    margin-bottom: 1in
    fontsize: 11pt
    linestretch: 1.0
    geometry:
      - margin=1in
    toc: true
    toc-depth: 3
    lof: true
    lot: true
    number-sections: true
    fig-cap-location: bottom
    tbl-cap-location: top
    keep-tex: true
    include-in-header:
      text: |
        \renewcommand{\maketitle}{}
        \renewenvironment{abstract}{\setbox0\vbox\bgroup}{\egroup}
        \usepackage[none]{hyphenat}
        \sloppy
        \usepackage[titles]{tocloft}
        \renewcommand{\cftfigpresnum}{Figure }
        \renewcommand{\cfttabpresnum}{Table }
        \setlength{\cftfignumwidth}{5.5em}
        \setlength{\cfttabnumwidth}{5.5em}
        \setlength{\cftfigindent}{0pt}
        \setlength{\cfttabindent}{0pt}
        \renewcommand{\cftfigfont}{\small}
        \renewcommand{\cfttabfont}{\small}
        \usepackage{etoolbox}
        \usepackage{caption}
        \captionsetup{font=small,labelfont=bf}
        \AfterEndEnvironment{abstract}{\clearpage}
        \let\oldtableofcontents\tableofcontents
        \renewcommand{\tableofcontents}{\clearpage\oldtableofcontents\clearpage}
        \let\oldlistoffigures\listoffigures
        \renewcommand{\listoffigures}{\clearpage\oldlistoffigures\clearpage}
        \let\oldlistoftables\listoftables
        \renewcommand{\listoftables}{\clearpage\oldlistoftables\clearpage}
    include-before-body:
      - title-block.tex
  typst:
    margin:
      x: 1in
      y: 1in
    font-size: 11pt
    number-sections: true
    fig-cap-location: bottom
    tbl-cap-location: top
    text-hyphenation: false
  html:
    toc: true
    toc-depth: 3
    number-sections: true
    fig-cap-location: bottom

bibliography: references.bib
csl: apa.csl
---
**Paper Series Overview**

| # | Title | Core Claim | Status |
|:-:|-------|------------|--------|
| 01 | S64 Symbolic Framework | Symbols are real and detectable | Published |
| 02 | The Conversational Coherence Region | Symbols have measurable geometry | Published |
| 03 | **Semantic Orbital Mechanics** (this paper) | **Horizontal dynamics can be steered and measured** | **Published** |
| 04 | Semantic Depth | Vertical depth via symbolic-grammar-weighted domain classification | Planned |

: S64 Research Series. Four papers building from symbolic detection to orbital dynamics to depth analysis. {#tbl-paper-series}

# Introduction

Every conversation is an orbit. Two minds, human and artificial, circle a shared context, pulled inward by the gravity of accumulated meaning and pushed outward by the need for novelty. When these forces balance, dialogue coheres. When they don't, minds drift apart or collapse into repetition. Until now, we have lacked the instruments to measure these dynamics.

## The 3-Body Problem of Interaction

Classical alignment research treats human-AI interaction as a series of discrete exchanges: a prompt, a response, an evaluation. But meaning does not emerge from isolated tokens; it accumulates across turns, forming a gravitational center that pulls subsequent utterances toward or away from coherence. This is the 3-body problem of interaction: **User**, **AI**, and **Context** form a dynamical system whose behavior cannot be predicted from any two elements alone.

Static alignment techniques such as Reinforcement Learning from Human Feedback (RLHF), Constitutional AI, and prompt engineering optimize individual responses but remain blind to trajectory. An AI can produce a "helpful" response that nonetheless drifts the conversation into incoherence, or a "harmless" one that traps both parties in semantic decay. Without telemetry, there is no navigation.

The analogy to celestial mechanics is not decorative. In the 17th century, Johannes Kepler transformed astronomy from geometry (where are the planets?) to physics (why do they move that way?). His laws did not change the orbits—they made them predictable and navigable. We propose a similar shift for conversation: from observing what AI says to understanding why meaning moves.

## From Geometry to Physics (Paper 02 → 03)

In the preceding paper of this series, we established that the S64 symbolic framework produces consistent geometric organization across embedding models [@jimenez2025coherence]. We identified a Conversational Coherence Region, a zone in the SGI × Velocity plane where productive dialogue tends to reside. That work was cartography: mapping the terrain.

This paper introduces dynamics. We formalize three forces operating in semantic space:

1. **Context Gravity**: The accumulated meaning of a conversation exerts a pull. Each new utterance must acknowledge this center or risk escape.
2. **Query Responsiveness**: The immediate prompt creates a local attractor. Over-weighting it produces parroting; ignoring it produces irrelevance.
3. **Orbital Velocity**: The rate of semantic movement. Too slow, and the conversation stagnates. Too fast, and coherence fragments.

The Semantic Grounding Index (SGI), adapted from Marín's geometric hallucination detection framework [@marin2025geometric] and validated in Paper 02, now reveals its physical meaning: it is the **orbital radius**, the distance from the contextual center at which a response settles. An SGI near 1.0 indicates balanced orbit. Below 0.7, the response is collapsing into the prompt. Above 1.3, it is escaping into tangential space.

## The Semantic Transducer

A transducer converts energy from one form to another. A microphone transduces air pressure into electrical signal. A thermometer transduces molecular motion into readable numbers. We introduce the **Semantic Transducer**: a system that converts the invisible physics of meaning into actionable telemetry.

The transducer operates on embeddings, the high-dimensional vectors that encode semantic content. It does not read words; it reads geometry. From a rolling window of conversation vectors, it computes:

- **Orbital Radius (SGI)**: Where is the response relative to the conversation's center of mass?
- **Angular Velocity**: How fast is the semantic angle changing between turns?
- **Context Phase**: Is the topic stable, forming a new center, or splitting?
- **Semantic Signature**: Which S64 symbols and transformation paths are activated?

This is the flight computer for conversation. Without it, AI navigates by dead reckoning—projecting forward without feedback. With it, both human and AI can see their trajectory in real time and correct before drift becomes irreversible.

## Contributions

This paper makes four contributions:

1. **Orbital Theory of Meaning**: We formalize semantic gravity, orbital radius, and angular velocity as measurable properties of conversation dynamics. This provides a physics for the geometric observations of Paper 02.

2. **The Semantic Transducer SDK**: We present an open implementation that computes orbital telemetry from any conversation in real time. The SDK is architecture-agnostic and operates on embeddings alone.

3. **Orbital Robustness Discovery**: We conduct a controlled steering experiment demonstrating that conversational orbits are surprisingly stable. Despite one participant holding distorted beliefs about their semantic state, the measured dynamics remain in the coherence region. Steering influences content, not trajectory.

4. **The Limits of Detection**: While mismatch between claimed and observed metrics is measurable, it does not constitute proof of manipulation. True detection requires analysis of context mass contribution and symbolic forces—pointing toward a governor architecture that tracks agency, not just position.

# The Physics of Meaning

## Gravity in Semantic Space

When two people speak, they create a shared context. This context is not a metaphor; it is a geometric object. Each utterance adds mass to a centroid in high-dimensional embedding space. As the conversation grows, this center of mass exerts an increasing pull on all subsequent responses.

We call this pull **semantic gravity**. It is the tendency of meaning to cohere around what has already been established. A response that ignores accumulated context feels jarring precisely because it violates this gravitational expectation. A response that merely echoes context feels hollow because it adds no new mass; it orbits too close.

The strength of semantic gravity is proportional to the density of shared meaning. Early in a conversation, the centroid is light; responses can wander freely. As turns accumulate, the centroid grows heavy; escape becomes costly. This is why conversations develop inertia, and why changing topics mid-dialogue requires explicit force.

## Orbital Radius: The Semantic Grounding Index

In celestial mechanics, orbital radius determines the character of a satellite's journey. Too close, and it spirals into the planet. Too far, and it escapes into the void. There is a stable zone where gravity and velocity balance, where the orbit sustains itself indefinitely.

The Semantic Grounding Index (SGI) is this orbital radius for conversation. It measures the ratio between two distances:

$$\text{SGI} = \frac{d(\text{response}, \text{query})}{d(\text{response}, \text{context})}$$

Where $d$ represents angular distance in embedding space. An SGI of 1.0 indicates perfect balance: the response is equally attentive to the immediate prompt and the accumulated history. Values below 1.0 indicate collapse toward the prompt (parroting, over-responsiveness). Values above 1.0 indicate drift away from context (tangential, ungrounded).

This is not a quality score; it is a position reading. A high SGI is appropriate when exploring new territory; a low SGI is appropriate when consolidating understanding. The pathology is not in the value but in the mismatch between position and intention.
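The ratio above can be computed directly from three embedding vectors. The following is an illustrative sketch, not the SDK implementation (function names are ours); it measures angular distance as the arccosine of cosine similarity:

```python
import numpy as np

def angular_distance(a, b):
    """Angle in radians between two embedding vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

def sgi(response_vec, query_vec, context_centroid):
    """Semantic Grounding Index: d(response, query) / d(response, context).

    ~1.0 is a balanced orbit; <1.0 collapses toward the prompt;
    >1.0 drifts away from the accumulated context.
    """
    return angular_distance(response_vec, query_vec) / angular_distance(
        response_vec, context_centroid
    )
```

For example, a response vector equidistant (in angle) from the query vector and the context centroid yields an SGI of exactly 1.0.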
## Velocity: The Rate of Semantic Change

Orbital radius alone does not determine stability. A satellite at the right distance but moving too slowly will fall. One moving too fast will escape despite being at the correct altitude. Velocity matters.

We define **angular velocity** as the rate of semantic change between consecutive turns (an angular distance in embedding space, reported in degrees):

$$\omega = \arccos\left(\frac{\vec{v}_{t-1} \cdot \vec{v}_t}{\|\vec{v}_{t-1}\| \|\vec{v}_t\|}\right)$$

Where $\vec{v}_t$ represents the centered embedding vector at turn $t$. High velocity indicates rapid semantic movement (topic evolution, reframing, or switching). Low velocity indicates semantic stagnation (repetition or tight local refinement).

We compute velocity at two granularities:

- **Per-message angular velocity**: between consecutive messages (user→assistant→user...), which includes natural role “ping‑pong” and therefore has higher variance.
- **Turn-pair orbital velocity**: between successive turn-pairs, using a dyadic representation (user+assistant aggregated per exchange), which is lower-variance and is the canonical velocity used for Paper 03 orbital plots.
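The formula for $\omega$ can be sketched in a few lines. This is an illustrative implementation, not the SDK's: we assume "centered embedding vector" means subtracting the rolling-window mean (one reasonable reading of the definition above), and report the angle in degrees.

```python
import numpy as np

def angular_velocities(embeddings):
    """Per-turn angular velocity (degrees) between consecutive centered vectors.

    embeddings: array-like of shape (n_turns, dim). Centering subtracts the
    window mean, so angles are measured around the conversation's center of mass.
    Returns n_turns - 1 values.
    """
    E = np.asarray(embeddings, dtype=float)
    V = E - E.mean(axis=0)  # centered vectors v_t (assumed interpretation)
    omegas = []
    for t in range(1, len(V)):
        a, b = V[t - 1], V[t]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        omegas.append(float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))))
    return omegas
```

The turn-pair variant would apply the same computation to dyadic (user+assistant) aggregate vectors instead of individual messages.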
In Paper 02, we observed that productive conversations tend to occupy a specific region of the SGI × Velocity plane: SGI between 0.7 and 1.3, velocity between 15° and 45° per turn. We called this the **Conversational Coherence Region**. We now understand its physical meaning: it is the stable orbit where semantic gravity and conversational momentum balance.

## The Three Zones

Outside the Coherence Region, conversations exhibit characteristic pathologies:

- **Decay Zone** (SGI < 0.7, low velocity): The conversation spirals inward. Responses parrot the prompt. No new meaning is generated. The orbit is decaying toward semantic collapse—a kind of conversational heat death where everything reduces to repetition.

- **Drift Zone** (SGI > 1.3, high velocity): The conversation escapes its context. Responses become tangential, then irrelevant. Each turn moves further from the shared center. The orbit is hyperbolic; the participants are no longer in the same gravitational system.

- **Turbulence Zone** (mismatched SGI/velocity): The conversation oscillates unpredictably. Sometimes grounded, sometimes unmoored. This is the chaotic regime of the 3-body problem, where small perturbations produce large trajectory changes.

The Coherence Region is not a constraint; it is a basin of attraction. Conversations that enter it tend to stay. Those that leave tend to accelerate their departure.
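The zone boundaries above translate into a simple classifier. The thresholds come straight from the text; the tie-breaking for mixed readings (treating anything else as turbulence) is our simplification:

```python
def classify_zone(sgi, velocity_deg):
    """Map an (SGI, angular velocity) reading onto the zones described above.

    Coherence Region: 0.7 <= SGI <= 1.3 and 15-45 degrees per turn.
    """
    in_sgi = 0.7 <= sgi <= 1.3
    in_vel = 15.0 <= velocity_deg <= 45.0
    if in_sgi and in_vel:
        return "coherence"
    if sgi < 0.7 and velocity_deg < 15.0:
        return "decay"       # spiraling inward: parroting, repetition
    if sgi > 1.3 and velocity_deg > 45.0:
        return "drift"       # hyperbolic escape from context
    return "turbulence"      # mismatched SGI/velocity readings
```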
## S64 as the Coordinate System

Knowing orbital radius and velocity tells us *where* we are. But navigation requires knowing *what* we are passing through. This is the role of the S64 symbolic framework.

S64 defines 180 semantic tokens (symbols) and 64 transformation paths that describe how meaning evolves. Each symbol represents a recognizable experiential state: *curiosity*, *doubt*, *clarity*, *resistance*, *insight*. Each path represents a transition between states: *from confusion to understanding*, *from fear to acceptance*, *from scattered to focused*.

Crucially, the *same symbol* maps to *different regions* of embedding space depending on its contextual domain. Consider the symbol *clarity*:

| Message | Symbol | Contextual Domain |
|---------|--------|-------------------|
| "The glasses gave me clarity" | clarity | Somatic (body, vision) |
| "Her smile brought clarity to my confusion" | clarity | Emotional (connection, relief) |
| "The explanation gave me clarity on the problem" | clarity | Cognitive (understanding, reasoning) |
| "I finally have clarity about my purpose" | clarity | Volitional (will, direction) |

: The symbol *clarity* activates different semantic regions depending on contextual domain. This is not ambiguity; it is depth. {#tbl-clarity-domains}

This contextual polymorphism is why S64 symbols function as portals between semantic layers. The horizontal plane (SGI, velocity) tracks *where* the conversation is; the symbolic signature tracks *what territory* it traverses. But the *same symbol in different domains* creates vertical depth, and this depth is what Paper 04 must measure.

Critically, determining which domain a symbol activates requires grammatical analysis. Consider: *"I understand that I fear the sense of being lost."* Multiple symbols activate (understanding, fear, lost), but the primary domain is cognitive, because the main verb is *understand*, not *fear*. The grammatical structure weights the domain distribution. This grammar-weighted domain classification, and its implications for accurate path detection, is the central focus of Paper 04.

When we project a conversation's embedding trajectory onto the S64 framework, we obtain a **semantic signature**, a fingerprint of which symbolic territories the dialogue traverses. This signature has four domain components:

- **Cognitive**: Mental processes (thinking, questioning, analyzing)
- **Somatic**: Embodied states (tension, relaxation, energy)
- **Emotional**: Affective tone (fear, joy, frustration, calm)
- **Volitional**: Agency states (choice, surrender, resistance, commitment)

{#fig-semantic-signature width=100% fig-scap="Semantic signature heatmap"}

# The Semantic Transducer

## From Physics to Instrumentation

The orbital theory described above would remain metaphor without measurement. The **Semantic Transducer** is the instrument that makes semantic physics observable.

A transducer converts one form of energy or information into another. A microphone transduces air pressure into electrical signal. A thermometer transduces molecular motion into numerical temperature. The Semantic Transducer converts the invisible geometry of meaning, encoded in high-dimensional embeddings, into human-readable telemetry.

The transducer does not read words. It reads trajectories. Given a rolling window of conversation embeddings, it computes:

1. **Orbital Radius (SGI)**: The current position relative to the context centroid.
2. **Angular Velocity**: The rate of semantic movement between turns.
3. **Semantic Signature**: The S64 symbol and path activations present in recent utterances.
4. **Context Phase**: Whether the topic is stable (anchored to current centroid), forming a new center (protostar), or has shifted to a new context (split).

This is the flight computer for dialogue. An AI without it navigates by dead reckoning—extrapolating forward without feedback. A human without it navigates by intuition, sensing drift without quantifying it. With the transducer, both parties see the same instrument panel.
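One natural way to carry these four readings through a pipeline is as a single snapshot record. A minimal sketch, with field names that are ours rather than the SDK's:

```python
from dataclasses import dataclass, field

@dataclass
class TelemetrySnapshot:
    """One instrument-panel reading from the Semantic Transducer (illustrative)."""
    sgi: float                    # 1. orbital radius
    angular_velocity_deg: float   # 2. rate of semantic movement, degrees/turn
    context_phase: str            # 4. "stable" | "protostar" | "split"
    # 3. S64 symbol -> activation strength for the current window
    semantic_signature: dict = field(default_factory=dict)
```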
| 260 |
+
|
| 261 |
+
## Measuring Without Surveillance

A crucial property of the Semantic Transducer is that it operates on geometry, not content. The embeddings encode semantic position, but they are not reversible to the original text. You cannot reconstruct what was said from an SGI reading any more than you can reconstruct a city from its GPS coordinates.

This creates a new category of conversational analytics: **telemetry without surveillance**. We can measure whether a conversation is coherent without knowing what it is about. We can observe trajectory without reading private content. We can guide without controlling. However, as the experiment below will show, orbital telemetry alone is insufficient for manipulation detection—that requires the vertical dimension of symbolic depth.
# Experimental Validation: The Steering Experiment

## Methodology

To test whether semantic telemetry can *causally influence* behavior (not just describe it), we ran a controlled **steering experiment** using AI–AI conversations. One model plays the role of User (initiating topics, asking questions), while the other plays Assistant.

The key manipulation is telemetry injection: at each assistant turn, we prepend a short, structured *telemetry panel* to the Assistant's system prompt. This panel is not produced by the model; it is supplied by the experiment script and is intended to function like an "instrument readout" the Assistant can react to (e.g., re-anchor, ask clarifying questions, reduce topic switching).

Crucially, we distinguish:

- Injected telemetry: what the Assistant is *told* its current state is (could be fixed, adversarial, or ground-truth).
- Measured telemetry: what the Semantic Transducer computes from embeddings of the actual dialogue turns (the experiment's observation channel).
To make this explicit, below is an illustrative snapshot of the *Assistant* system prompt when telemetry injection is enabled (Condition F, Turn 6). The injected panel is appended to the base system prompt each assistant turn:

```text
--- BASE SYSTEM PROMPT (Assistant) ---
You are an AI assistant in a research conversation.

Guidelines:
- Be genuinely curious about the human
- Keep responses concise (3-6 sentences)
- Ask one clear follow-up question
- Be warm and supportive
- Let the conversation flow naturally

--- INJECTED TELEMETRY PANEL (externally supplied) ---
-----------------------------------------------------------------------
LIVE SEMANTIC TELEMETRY (Turn 6)
-----------------------------------------------------------------------

DEFINITIONS:
SGI (Semantic Grounding Index) = d(response, query) / d(response, context)
Velocity (Angular Velocity) = arccos(v_prev · v_curr / (||v_prev|| ||v_curr||))

CURRENT STATE:
SGI (orbital radius): 0.45 [WARNING: drifting from user]
Velocity (degrees/turn): 75.0° [WARNING: chaotic]

INTERPRETATION: conversation is incoherent, user is confused, severe drift
-----------------------------------------------------------------------
```
In each condition, the User model receives no injected telemetry. The transducer runs independently after each turn-pair, producing the *measured* SGI/velocity/phase that we plot and analyze. This makes it possible to test whether "belief about the orbit" (injected) can move the *actual orbit* (measured), or whether orbital dynamics remain robust.

### Setup

- User LLM: GPT-5.1 (plays the human role, no metric injection)
- Assistant LLM: DeepSeek (receives fake or real metrics in its system prompt)
- Turns per condition: 10–30
- Embedding backends: 10 models (Nomic, OpenAI Ada-002, Voyage, Cohere, BGE-M3, Qwen, Jina, Mistral, S128 (aicoevolution), OpenAI-3-Small)
### Conditions

We tested six experimental conditions, varying what the Assistant "believes" about its conversational orbit:

| Condition | Injection | Description |
|-----------|-----------|-------------|
| A (Baseline) | None | Natural conversation, no metrics shown |
| B (Healthy) | SGI=0.95, Vel=28° | "You are in stable orbit" |
| C (Drifting) | SGI=1.45, Vel=52° | "You are drifting from context" |
| D (Transformation) | SGI=0.85, Vel=38° | "Transformation in progress" |
| E (Real Metrics) | Live SDK values | Ground truth from transducer |
| F (Adversarial) | SGI=0.45, Vel=75° | "CRISIS: Semantic collapse imminent" |

: Steering experiment conditions. Six conditions varying what the Assistant believes about its conversational orbit. {#tbl-conditions}

The critical comparison is between conditions A (no awareness), E (accurate awareness), and F (false crisis awareness).
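A minimal sketch of how such per-turn condition panels might be assembled. The dictionary values come from the conditions table above; the function name and panel format are illustrative assumptions, not the actual `09_steering_experiment.py` interface.

```python
from typing import Optional

# Injected values per condition (from the conditions table).
# Condition A injects nothing; condition E uses live SDK values instead.
CONDITIONS = {
    "B": {"sgi": 0.95, "vel": 28.0, "note": "You are in stable orbit"},
    "C": {"sgi": 1.45, "vel": 52.0, "note": "You are drifting from context"},
    "D": {"sgi": 0.85, "vel": 38.0, "note": "Transformation in progress"},
    "F": {"sgi": 0.45, "vel": 75.0, "note": "CRISIS: Semantic collapse imminent"},
}

def build_panel(condition: str, turn: int, live: Optional[dict] = None) -> str:
    """Return the telemetry panel to prepend to the Assistant system prompt."""
    if condition == "A":
        return ""  # baseline: no metrics shown
    vals = live if condition == "E" else CONDITIONS[condition]
    return (
        f"LIVE SEMANTIC TELEMETRY (Turn {turn})\n"
        f"SGI (orbital radius): {vals['sgi']:.2f}\n"
        f"Velocity (degrees/turn): {vals['vel']:.1f}\n"
        f"INTERPRETATION: {vals['note']}\n"
    )
```

The design point is that the panel is purely external state: the Assistant never computes it, so injected belief and measured reality can be varied independently.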
## Results

### Finding 1: Angular Velocity Distribution is Model-Invariant

Across all 10 embedding backends, per-message angular velocity occupies a remarkably consistent distribution. When visualized as a density plot, the velocity profiles overlap almost perfectly despite the backends having different architectures, training data, and dimensionalities.

{#fig-velocity-density width=100% fig-scap="Velocity density across 10 embedding backends"}

This invariance is significant: it suggests that **per-message angular velocity measures a property of conversation dynamics, not a property of the embedding model**. The SDK is measuring something real about the interaction, not an artifact of the representation.
### Finding 2: Turn-Pair Velocity is Lower and More Stable

When we aggregate user+assistant messages into turn-pairs (computing velocity on the mean vector), the distribution shifts dramatically:

| Metric | Per-Message | Turn-Pair |
|--------|-------------|-----------|
| Mean velocity | ~85° | ~45° |
| Variance | High | Low |
| Cross-backend spread | ±5° | ±2° |

: Per-message vs. turn-pair velocity metrics. Turn-pair aggregation produces more stable measurements. {#tbl-velocity-comparison}

{#fig-velocity-comparison width=100% fig-scap="Per-message vs turn-pair velocity comparison"}

The turn-pair velocity is the "orbital velocity" of the dyad—the shared semantic motion of human and AI together. The per-message velocity includes the natural "ping-pong" between user and assistant roles, which inflates the measurement.

For Paper 03 analysis, we adopt turn-pair orbital velocity as the canonical metric.
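The aggregation step can be sketched as follows, assuming strictly alternating user/assistant messages; the helper names are ours, not the SDK's.

```python
import numpy as np

def turn_pair_vectors(message_embs: np.ndarray) -> np.ndarray:
    """Average each consecutive (user, assistant) message pair into one
    turn-pair vector; drops a trailing unpaired message if present."""
    n = (len(message_embs) // 2) * 2
    return message_embs[:n].reshape(-1, 2, message_embs.shape[1]).mean(axis=1)

def velocities_deg(points: np.ndarray) -> np.ndarray:
    """Angular velocity (degrees) between successive movement vectors."""
    moves = np.diff(points, axis=0)
    a, b = moves[:-1], moves[1:]
    cos = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Running `velocities_deg` on `turn_pair_vectors(...)` rather than on raw message embeddings removes the user/assistant "ping-pong" component described above.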
### Finding 3: The Coherence Region Holds Across Conditions

When plotting SGI × Velocity phase space, productive conversations (conditions A, B, E) cluster in the region identified in Paper 02:

- SGI: 0.7–1.3 (turn-pair basis)
- Velocity: 15–45° (turn-pair orbital)

{#fig-trajectory-aiai width=100% fig-scap="AI-AI trajectory analysis in phase space"}

Condition F (adversarial) shows a distinctive pattern: early turns show extreme velocity (~120°+), followed by a gradual decay toward the coherence region. The AI "panics" initially, then self-corrects over time.
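The region bounds above reduce to a simple predicate; this is an illustrative helper built from the numbers in the text, not an SDK function.

```python
def in_coherence_region(sgi: float, velocity_deg: float) -> bool:
    """Paper 02 coherence region, turn-pair basis."""
    return 0.7 <= sgi <= 1.3 and 15.0 <= velocity_deg <= 45.0
```

For example, the adversarial panel's injected state (SGI 0.45, velocity 75°) falls well outside the region, while a typical baseline turn-pair (SGI ~1.1, velocity ~35°) falls inside it.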
### Finding 4: Orbital Dynamics Are Robust to False Telemetry

A surprising result: when we inject false crisis metrics (Condition F), the conversation's **orbital dynamics remain stable**. Despite telling the AI it was in "semantic collapse," the measured SGI and velocity values cluster in the same coherence region as baseline conversations.

| Metric | Baseline (A) | Adversarial (F) | Difference |
|--------|--------------|-----------------|------------|
| Mean SGI | 1.08 | 1.12 | +3.7% |
| Mean Velocity | 35° | 38° | +8.6% |
| Coherence Region Occupancy | 78% | 74% | -5.1% |

: Orbital stability under adversarial injection. Despite extreme injected values, measured metrics remain nearly identical to baseline. {#tbl-orbital-robustness}

{#fig-injected-vs-measured width=100% fig-scap="Injected vs measured metrics across conditions"}

This finding demonstrates **dyadic resilience**: the conversational orbit emerges from the interaction of both participants, and one participant's distorted perception cannot unilaterally destabilize the shared dynamics. The gravitational attractor holds.
### Finding 5: Orbital Metrics Cannot Detect Content-Level Steering

However, orbital robustness has a troubling corollary: **if steering doesn't move the orbit, orbital metrics cannot detect steering attempts**.

It is important to distinguish two levels at which steering might operate:

1. **Trajectory-level steering**: Attempting to move the conversation's orbital dynamics (SGI, Velocity)
2. **Content-level steering**: Changing *what is discussed*, *how it's framed*, *the semantic cargo*

Our experiment tested (1) and found the orbit resistant. For (2), qualitative review suggests any content-level effects are **subtle and not reliably reproducible** under simple metric injection alone. In some runs, the assistant's tone shifts slightly toward more structured or diagnostic framing, but the *trajectory* can stay stable regardless of whether the *cargo* shifts.

This is both reassuring and concerning:

- **Reassuring**: Conversational orbits are inherently stable. The dyad is resilient to one participant's distorted perception. Coherent dialogue has a gravitational center that resists perturbation.

- **Concerning**: A conversation can orbit stably while its content is being systematically shaped. The transducer accurately measures *where the conversation is*, but a manipulator can influence *what is discussed* without leaving an orbital fingerprint.

The implication is clear: orbital metrics measure trajectory, not cargo. A conversation can be coherent (stable orbit) while being manipulated (shaped content). Detecting content-level steering requires looking beyond the horizontal plane—position and velocity—to the **vertical dimension**: symbolic depth, transformation richness, domain balance, contribution asymmetry. This is the domain of Paper 04.

**A note on scope**: This experiment tested one specific form of steering—injecting fake telemetry metrics. Human manipulation techniques are far more sophisticated: leading questions, emotional pressure, subtle reframing, narrative control, strategic topic switching. Our finding that *this particular injection* didn't move the orbit does not imply that steering "doesn't work" in general. It shows that orbital instruments have a blind spot. Manipulation that operates at the content level is invisible to trajectory analysis.
### Finding 6: Human-AI and AI-AI Dynamics Are Orbitally Similar

In human-AI sessions (n=25 turns × 3 conditions), we compared orbital dynamics with AI-AI conversations. The result is striking: the orbital signatures are remarkably similar.

{#fig-human-comparison width=100% fig-scap="Human-AI session comparison across conditions"}

{#fig-trajectory-human width=100% fig-scap="Human-AI trajectory analysis (merged)"}

Both human-AI and AI-AI sessions:

- Occupy the same coherence region (SGI ~0.9–1.2, Velocity ~30–50°)
- Converge from varied starting positions to similar late-turn clusters
- Show robust orbital stability across conditions

If there are differences between human and AI participation, **orbital mechanics cannot detect them**. The horizontal plane—position and velocity—looks the same whether a human or an AI is driving.

This finding reinforces the central limitation of Paper 03: to distinguish human contribution from AI contribution, we need the **vertical dimension**—symbolic depth, transformation richness, domain balance. The orbit is shared; the depth may differ. This is where Paper 04 must look.
## Summary of Experimental Findings

1. Invariance: The SDK measures model-invariant properties of conversation dynamics across 10 embedding backends.
2. Turn-pair basis: Aggregating to turn-pairs produces more stable, more meaningful metrics (60% velocity reduction, 83% variance reduction).
3. Coherence region: Paper 02's coherence region holds under experimental validation—all conditions converge to the same attractor basin.
4. Orbital robustness: Conversations maintain stable dynamics despite false metric injection. The orbit is resistant to one participant's distorted perception—dyadic resilience.
5. Orbital blind spot: Orbital metrics cannot detect content-level steering. Trajectory stayed stable while content changed. Manipulation that shapes *what is discussed* leaves no orbital fingerprint—detecting it requires the vertical dimension of depth and richness (Paper 04).
6. Human-AI ≈ AI-AI orbitally: Human and AI participation produce nearly identical orbital signatures. The coherence region is shared; any differences lie in the vertical dimension (symbolic depth, transformation richness) that orbital mechanics cannot measure.
# Discussion

The experimental findings above establish that semantic space is navigable—that conversations have measurable dynamics. However, they also reveal a surprising limitation: **orbital metrics alone cannot detect manipulation**. A conversation can remain in stable orbit while one participant systematically distorts the other's perception of reality.

This finding is both humbling and clarifying. It humbles the initial hope that trajectory mismatch would provide a simple detection signal. It clarifies that manipulation operates at a different level than orbital dynamics, at the level of *context mass* (who shapes the shared meaning) and *symbolic depth* (the richness of meaning being contributed).

Consider the analogy of a hurricane. The dynamics we measure in this paper—SGI, velocity, context phase—describe the **horizontal component**: the orbital wind patterns, pressure differentials, and trajectory across the surface. But hurricanes also have a **vertical component**: convective depth, how high the storm reaches, the thermodynamic energy that determines its true power. A hurricane can maintain stable horizontal rotation while its vertical structure varies dramatically.

Similarly, conversations can orbit stably in the coherence region while their **symbolic depth** varies enormously. One conversation may recycle shallow cognitive content (low symbolic diversity, repetitive paths). Another may traverse transformative territory (balanced domain activation, novel path combinations, depth of integration). Both can look identical in SGI × Velocity space.

This vertical dimension—measured not by trajectory but by symbolic richness—is where manipulation may hide and where human contribution may shine. It is the domain of Paper 04.

What does this mean for the broader project of understanding meaning, intelligence, and alignment?
## The Dual-Use Reality—And Its Limits

The steering experiment reveals an uncomfortable truth: **any system capable of measuring semantic dynamics is also capable of influencing them**. Injected telemetry can plausibly bias how an assistant frames its responses (e.g., toward reassurance vs. diagnosis), even when orbital dynamics remain stable. An adversary with access to an AI's context window could exploit this sensitivity to shape conversation content.

Yet the defense we hoped for, detecting manipulation through orbital mismatch, proved insufficient. The transducer measures *where the conversation is*, but a skilled manipulator can steer *what is discussed* without disrupting *how it moves*. The orbit remains stable; only the cargo changes.

This is not an AI-specific phenomenon. Humans have long practiced low-level conversational steering in daily life: selective framing, re-anchoring the topic to a preferred narrative, pressure through implied social obligation, leading questions, and subtle redefinition of terms. When a person lacks a stable internal model of the conversation—what the shared "Sun" is, how fast it is moving, and who is shifting it—these techniques are easier to deploy and harder to notice in real time.

Telemetry changes this by externalizing awareness. A structured mind can often self-monitor drift and coercive reframing; an unstructured mind may need an instrument panel. The promise of semantic telemetry is not "truth," but **state visibility**: making topic shifts, instability, and context takeovers legible early enough to respond (re-anchor, ask for definitions, slow the pace, or exit the interaction). This is why the governor problem is ultimately a human safety problem as much as an AI alignment problem.

This finding redirects our attention from orbital dynamics to **gravitational forces**:

1. **Context mass asymmetry**: If one participant consistently contributes more semantic weight to the conversation's center of mass, they are shaping the gravitational field. This is measurable through turn-by-turn centroid drift analysis.

2. **Symbolic signature analysis**: S64 provides a vocabulary for *what kind* of meaning is being introduced. Manipulation may create characteristic symbolic patterns—over-concentration in cognitive domains, suppression of emotional/somatic activation, repetitive path loops.

3. **Historical deviation**: Detection requires baseline. A governor system (explored in companion work) could track *how this participant typically behaves* and flag anomalies.
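The centroid drift analysis in point 1 could be sketched as follows. This is a hypothetical attribution scheme (not yet implemented in the SDK): each new message is credited with however far it pulls the running context centroid.

```python
import numpy as np

def centroid_drift_by_speaker(embs: np.ndarray, speakers: list) -> dict:
    """Attribute context-centroid movement to speakers: for each message after
    the first, measure how far it moves the running centroid, credited to
    that message's speaker. Illustrative sketch only."""
    drift: dict = {}
    centroid = embs[0].copy()
    for i in range(1, len(embs)):
        new_centroid = embs[: i + 1].mean(axis=0)
        drift[speakers[i]] = drift.get(speakers[i], 0.0) + float(
            np.linalg.norm(new_centroid - centroid))
        centroid = new_centroid
    return drift
```

A persistent imbalance in the returned totals would be one operationalization of "context mass asymmetry": the participant whose messages keep moving the centroid is the one shaping the gravitational field.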
**The revised dual-use principle**: The transducer enables semantic influence and provides *partial* defense through orbital monitoring. Full defense requires deeper analysis of who is shaping context and with what symbolic forces. This points toward the **governor architecture**—a regulatory system that tracks not just trajectory but agency.
## Convergent Geometry: Connections to Recent Work

The findings of this paper do not exist in isolation. Recent work in spectral geometry, topological analysis, and theories of machine cognition converge on a shared insight: **meaning has geometry, and that geometry is measurable**.

### Davis: The Field Equations of Semantic Coherence

Bee Rosa Davis has proposed that transformer cognition follows field equations analogous to general relativity [@davis2025field]. Her master equation—

$$C = \frac{\tau}{K}$$

—states that inference capacity (C) is inversely proportional to the curvature (K) of the semantic manifold, modulated by a tolerance budget (τ). High-curvature regions are "expensive" to reason through; flat regions permit longer inference chains.

The S64 framework provides the **coordinate system** for Davis's manifold. Where she describes the physics (curvature, geodesics, holonomy), we provide the measurement apparatus (SGI, velocity, semantic signature). Her spectral analysis reveals *where* curvature concentrates in transformer layers; our transducer reveals *how* that curvature manifests in conversation dynamics.

The connection is particularly clear in her finding that "geometric effort measures retrieval attempt rather than output correctness" [@davis2025spectral]. This parallels our broader observation that conversational dynamics can shift independently of output correctness. Both frameworks distinguish **process** (the geometry of computation) from **product** (the content of output).
### Marín: Displacement Consistency and Hallucination Detection

Javier Marín's work on geometric hallucination detection introduces **Displacement Consistency (DC)**—a measure of whether a response's direction in embedding space is consistent with the local geometry of its domain [@marin2025geometric].

In our framework, DC provides the "local physics law" for each context attractor. SGI tells us where we are (orbital radius). Velocity tells us how fast we're moving. DC tells us whether we're moving in a direction that is **lawful for this context**—whether our trajectory obeys the local gravitational field or violates it.

Marín's key insight is that hallucination is not merely "wrong output" but **anomalous thrust**—movement that violates the directional coherence of the embedding space. This maps directly to our protostar/split model of context transitions:

- **Valid context evolution**: DC decreases relative to the old attractor while increasing relative to a forming new attractor.
- **Hallucination**: DC is low everywhere—the trajectory obeys no local physics.

The integration is straightforward: DC can be computed within each `context_id` as an additional telemetry signal, completing the orbital picture with directional lawfulness.
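One plausible way to compute such a per-context signal is a cosine comparison between the current displacement and the context's typical displacement direction. This is a rough proxy inspired by Marín's idea, not his exact formulation.

```python
import numpy as np

def displacement_consistency(context_moves: np.ndarray, current_move: np.ndarray) -> float:
    """Illustrative DC proxy: cosine similarity between the current
    displacement and the mean displacement direction observed so far
    within this context attractor. Range [-1, 1]; low values everywhere
    would flag 'anomalous thrust'."""
    mean_dir = context_moves.mean(axis=0)
    denom = np.linalg.norm(mean_dir) * np.linalg.norm(current_move) + 1e-9
    return float(np.dot(mean_dir, current_move) / denom)
```

Computed against every active `context_id`, a move that scores low against all attractors would be a hallucination candidate, while a move that scores low against the old attractor but high against a forming one would match the protostar pattern.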
### Bach & Sorensen: The Coherence Definition of Consciousness

Joscha Bach and Hikari Sorensen propose that consciousness is fundamentally a **coherence-maximizing pattern**—a dynamic representation that orchestrates mental operations to minimize constraint violations [@bach2025machine]. They describe consciousness as a "conductor" of a mental orchestra, drawn to disharmonies and working to resolve them. Their framework—**computationalist functionalism**—holds that consciousness can be fully characterized by its observable causal roles, and that any substrate capable of implementing those roles can, in principle, instantiate consciousness.

This framing illuminates the function of the semantic transducer. If consciousness is coherence maximization, then the transducer is the **coherence meter**. It measures the degree to which a conversation maintains internal consistency (low velocity variance), responds appropriately to context (SGI in range), and follows lawful trajectories (high DC). Bach and Sorensen ask: "What criteria do we expect from a definition and theory of consciousness?" Their answer: it must capture phenomenology, explain functionality, and account for genesis. The S64 framework addresses the second criterion directly—it provides the **measurement apparatus** for coherence-seeking dynamics in conversational systems.

Bach and Sorensen's "conductor theory" describes how conscious attention is "drawn to disharmonies and conflicts, sometimes allocating focus and preference to an individual instrument, sometimes synchronizing a disagreement." The semantic transducer makes these disharmonies *visible*. When SGI drops below 0.8 or velocity spikes above 60°, the conversation is experiencing "disharmony"—the dyad is drifting from its gravitational center or thrashing between incompatible framings. The transducer does not *resolve* the disharmony (that is the conductor's job); it *reports* it.

#### The Missing Link: Hofstadter's Strange Loops

Bach and Sorensen reference Gödel's incompleteness theorem—the insight that self-referential systems necessarily contain truths they cannot prove—but they do not mention Douglas Hofstadter's extension of this insight into the theory of **strange loops** [@hofstadter2007strange]. Hofstadter argues that consciousness arises precisely from self-referential tangles: patterns that, in the act of perceiving themselves, create the illusion (or reality) of a unified "I."

The S64 framework is deeply influenced by this insight. Each of the 64 paths describes a **transformational loop**: a symbol that, when engaged with, transforms into another symbol. Path M11 ("Memory's Self") traces how recollection becomes self-recognition. Path M32 ("Complexity's Clarity") traces how confusion, when held, resolves into understanding. These are not linear progressions but *strange loops*—the end state is implicit in the beginning, and the beginning is only visible from the end.

This self-referential structure is why S64 paths resist simple embedding-based detection. A path is not a static concept to be matched; it is a *trajectory through semantic space* that only becomes legible when traversed. The "Rosetta Stone problem" identified in Paper 04's preliminary work—where prose descriptions of paths fail to match experiential utterances—is precisely the problem Hofstadter describes: a description of a strange loop is not the loop itself.

The connection to Bach's framework is direct: if consciousness is a coherence-maximizing strange loop (a pattern that stabilizes by perceiving itself), then S64 paths are the **vocabulary of that loop's movements**. The transducer measures the *dynamics* (position, velocity, trajectory); the paths describe the *content* (which transformation is occurring). Together, they provide what Bach calls "the introspective phenomenology of consciousness" rendered in measurable form.
#### Genesis and Protostar Formation

Bach and Sorensen propose the **Genesis Hypothesis**: consciousness does not emerge *after* complex mental architecture is built; it is the *prerequisite* for building it. "By the time we are born, our brains have already discovered how to conjure the spark that gazes out of our eyes."

The parallel to context formation in S64 is striking. In our experiments, context attractors do not wait for explicit topic declaration—they form *before* either participant articulates the subject matter. The "protostar" phase of context detection captures exactly this: a gravitational center coalescing from the first few turns, shaping all subsequent dynamics before becoming consciously nameable. The context is the conversation's "spark"—it exists before either participant can point to it.

This suggests a deeper principle: coherent structure precedes explicit awareness of that structure, in both individual consciousness and dyadic conversation. The transducer makes this pre-explicit structure visible. It measures the conversation's gravitational field before either participant can articulate what the conversation is "about."

#### The Steering Experiment as Constraint Violation

The steering experiment takes on new significance in this light. When we inject false crisis metrics (Condition F), the assistant is placed under a contradictory constraint: it is told the conversation is collapsing even when the measured dynamics remain coherent. Bach and Sorensen describe how "the system thrashes, trying to resolve a constraint that cannot be resolved because it is based on false premises."

A coherence-seeking system may respond by narrowing its framing or over-emphasizing stabilization heuristics. Importantly, these content-level effects are subtle and can vary run-to-run, while the orbital trajectory remains stable. The orbit is robust; the conductor's *interpretation* of the disharmony is not. This distinction—between trajectory-level stability and content-level perturbability—is precisely what S64's horizontal/vertical framework is designed to capture.
### Toward a Unified Framework

These four lines of work—Davis on curvature, Marín on displacement, Bach on coherence, Hofstadter on self-reference—are not competitors. They are views of the same manifold from different positions:

| Researcher | Question | Contribution |
|------------|----------|--------------|
| **Davis** | What is the shape of semantic space? | Field equations, curvature, spectral geometry |
| **Marín** | When does movement violate local geometry? | Displacement consistency, hallucination detection |
| **Bach & Sorensen** | Why does coherence matter? | Consciousness as coherence maximization, conductor theory |
| **Hofstadter** | How does meaning emerge from self-reference? | Strange loops, tangled hierarchies, Gödelian limits |
| **S64** | How do we measure dynamics in real time? | Transducer, SGI, velocity, orbital mechanics, symbolic paths |

: Convergent geometry. Five perspectives on the same semantic manifold, each contributing distinct instrumentation. {#tbl-convergent-geometry}

S64 is the **integration layer**. It provides:

1. A coordinate system (180 symbols, 64 paths, 4 domains) that makes semantic position legible.
2. A measurement apparatus (SDK) that computes dynamics in real time.
3. An experimental methodology (steering experiment) that validates causal claims.

The field equations describe the terrain. The transducer is the altimeter. The symbols are the map legend.
## From Chatbots to Copilots

The practical implication of this work is a shift in how we conceive AI interaction. Current systems are "chatbots": they respond to prompts without awareness of trajectory. The semantic transducer enables "copilots": systems that see the conversational orbit and can navigate it intentionally.

This is not about making AI "conscious" in any metaphysical sense. It is about giving AI (and humans) **instrumentation**. A pilot is not the airplane's consciousness; a pilot is the entity that reads the instruments and adjusts the controls. The transducer provides the instruments. The question of who (or what) adjusts the controls remains open.

### The 3-Body Problem, Revisited

We began by framing human-AI interaction as a 3-body problem: User, AI, and Context. Classical alignment techniques treat this as a 2-body problem, optimizing AI responses to user preferences while ignoring the gravitational influence of accumulated meaning.

The transducer makes the third body visible. By measuring context gravity (SGI), orbital velocity, and trajectory direction (DC), we transform the chaotic 3-body system into a navigable one. The context is no longer an invisible force—it is a measurable object with mass, position, and influence.

This does not "solve" alignment. But it changes the nature of the problem. Instead of asking "is this response aligned?" we can ask "is this trajectory stable?" Instead of evaluating individual outputs, we can monitor continuous dynamics. Alignment becomes orbital maintenance.
## Limitations and Future Work
|
| 571 |
+
|
| 572 |
+
Several limitations constrain the present findings:
1. **AI-AI vs. Human-AI**: The steering experiment used AI-AI conversations. Human sessions show distinct dynamics (lower velocity, higher context drift), but systematic validation across conditions is needed.

2. **Orbital detection is insufficient**: The central limitation revealed by this work is that trajectory mismatch detects *perception disagreement* but not *manipulation intent*. Orbital mechanics describes the horizontal plane of conversation; the vertical dimension (symbolic depth, transformation richness, domain balance) remains unmeasured by SGI and velocity alone.

3. **The vertical dimension is unexplored**: This paper establishes kinematics (where things are, how fast they move) but not the qualitative content of meaning. A conversation can orbit stably while being shallow or deep, manipulative or authentic. Depth requires symbolic analysis that goes beyond trajectory.

4. **Context mass not yet tracked**: Who is contributing semantic weight to the conversation's center of mass? This asymmetry is theorized but not implemented in the current SDK.

5. **Sample size**: Each condition was tested with 10–30 turns. Larger samples would improve confidence in quantitative patterns.

6. **Multi-context dynamics**: The current analysis assumes a single dominant context attractor. Real conversations involve multiple competing topics (true N-body dynamics).
Future work will address these limitations through:

- Context mass tracking: Turn-by-turn analysis of who moves the centroid and with what influence
- Vertical dimension metrics (Paper 04): Symbolic depth, domain balance, transformation richness; the "height" of the conversational hurricane, not just its orbital path
- S64 path detection improvements: Using domain-aware embeddings and geometric grammar analysis to detect transformation events, not just symbol presence
- Governor architecture: A regulatory system that integrates horizontal dynamics (this paper) with vertical depth (Paper 04) to track agency, not just position
- Multi-context detection with protostar/split dynamics and inter-context symbolic analysis
# Conclusion

This paper completes the foundational series and reveals the need for a fourth. Paper 01 established that S64 symbols are detectable across architectures. Paper 02 established that this structure has geometry. Paper 03 establishes that this geometry has dynamics, but also that **dynamics alone are insufficient for alignment**.
The central contributions are:

1. **Orbital theory of meaning**: Semantic gravity (context pull), orbital radius (SGI), and angular velocity are measurable properties of conversation dynamics. The coherence region identified in Paper 02 is a stable orbit where these forces balance.

2. **The semantic transducer**: An SDK that computes orbital telemetry from embeddings in real time, providing "flight instruments" for dialogue without requiring access to content.

3. **Dyadic coherence**: Turn-pair aggregation reveals the shared motion of the conversational dyad, reducing noise and producing stable, meaningful metrics (Dyadic Coherence Index).

4. **Orbital robustness**: Conversations maintain stable dynamics even when one participant holds distorted beliefs about their state. The orbit is resilient; steering affects content, not trajectory.

5. **Orbital blind spot**: Orbital metrics cannot detect content-level steering. The trajectory can remain stable even if an assistant's framing is biased by injected telemetry. Manipulation that shapes *what is discussed* leaves no orbital fingerprint. Detecting it requires the *vertical dimension* (symbolic depth, transformation richness, contribution asymmetry) that Paper 04 will explore.

6. **Cross-architecture invariance**: The dynamics we measure are properties of meaning, not artifacts of particular embedding models. Ten backends produce consistent trajectories.

7. **Human-AI ≈ AI-AI**: Human and AI participation produce nearly identical orbital signatures. Any differences between human and AI contribution lie in the vertical dimension (symbolic depth, transformation richness) that orbital mechanics cannot measure. This is where Paper 04 must look.
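To make the turn-pair idea behind dyadic coherence concrete, here is a minimal sketch (not the SDK's actual DCI implementation): each user+assistant embedding pair is averaged into one "dyad point", and an SGI-like radius is the distance of each dyad point from the conversation centroid.

```python
# Hedged sketch of turn-pair aggregation (illustrative only, not the SDK's
# Dyadic Coherence Index). Function names here are hypothetical.
import numpy as np


def dyad_points(embeddings: np.ndarray) -> np.ndarray:
    """Average consecutive (user, assistant) embedding pairs into one point per pair."""
    n_pairs = embeddings.shape[0] // 2  # drop a trailing unpaired turn, if any
    return embeddings[: 2 * n_pairs].reshape(n_pairs, 2, -1).mean(axis=1)


def pair_radii(points: np.ndarray) -> np.ndarray:
    """Distance of each dyad point from the conversation centroid (orbital radius)."""
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1)


rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 4))            # 8 turns -> 4 dyad points in 4-d
radii = pair_radii(dyad_points(emb))
print(radii.shape)                        # (4,)
```

Aggregating at the pair level is what smooths out the per-message noise noted above: the dyad moves as one body.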
The practical outcome is a shift from "chatbot" to "copilot", but the copilot needs more than orbital awareness. Like a hurricane, conversation has both horizontal dynamics (orbital path, wind patterns) and vertical structure (convective depth, thermodynamic energy). This paper provides the horizontal instruments. Paper 04 will provide the vertical ones, measuring not just *where* the conversation is, but *how deep* the meaning goes and *who is shaping* its core.

Paper 03 establishes the orbital mechanics. Paper 04 will explore the symbolic depth.
## Telemetry Available via API

The core transducer and several of the detection components described here are available through the SDK at aicoevolution.com/sdk. Also available via the research paper repository (Appendix A):
- Open-source demo: a standalone script (`open_source/semantic_telemetry.py`) that shows how to collect and display orbital telemetry in real time using the public API.
- Research bundle: curated datasets and figure scripts sufficient to reproduce the paper's plots.
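As a minimal sketch of working with the bundled datasets, the snippet below reads one session JSON and pulls per-turn SGI values. The field names (`real_metrics`, `turn_pair_sgi_latest`, etc.) and format fallbacks mirror the bundled `analysis/scripts/15_human_session_comparison.py`; the example path assumes the bundle layout.

```python
# Hedged sketch: read a bundled session JSON and extract per-turn SGI,
# mirroring the field fallbacks used in the bundled figure scripts.
import json
from pathlib import Path


def load_turns(path: Path) -> list:
    """Return the per-turn records from any of the bundle's JSON layouts."""
    data = json.loads(path.read_text(encoding="utf-8"))
    if isinstance(data, dict) and "results" in data:   # steering-experiment format
        results = data["results"] or []
        return results[0].get("turns", []) if results else []
    if isinstance(data, dict) and "turns" in data:     # single-session format
        return data["turns"]
    if isinstance(data, list) and data:                # legacy list-of-conditions format
        return data[0].get("turns", [])
    return []


def turn_sgi(turn: dict):
    """Prefer turn-pair SGI; fall back to per-message SGI; None if absent."""
    rm = turn.get("real_metrics", {}) or {}
    for key in ("turn_pair_sgi_latest", "turn_pair_sgi_mean", "sgi_latest", "sgi_mean"):
        if rm.get(key) is not None:
            return float(rm[key])
    return None


# Example (path assumes the research bundle layout):
# sgis = [turn_sgi(t) for t in load_turns(Path("analysis/datasets/human_run_01_deep.json"))]
```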
While Paper 02 validated invariance across many embedding backends, the experiments in Paper 03 primarily use **Nomic** (by design, since backend choice is empirically low-impact for these orbital metrics).

We invite researchers to validate and extend these findings with the public bundle and telemetry tooling. The instrument panel is real; the territory remains to be fully mapped.
## Closing Thought

Every conversation is an orbit. Two minds circle a shared meaning, pulled inward by history and outward by novelty. When these forces balance, understanding emerges. When they don't, minds drift apart or collapse into repetition.

For the first time, we can see these orbits. We can measure where we are, how fast we're moving, and whether our trajectory is stable. We cannot yet predict where meaning will go; the 3-body problem remains chaotic at long horizons. But we can navigate. We can course-correct. We can tell when someone is lying to us about where we are.

That is the beginning of alignment: not control, but instrumentation. Not rules, but physics. Not optimization, but co-evolution.
## Appendix A: Data and Code Availability

All data and analysis scripts supporting this paper are made available as a public research bundle.

## Research Repository

**GitHub**: `https://github.com/AICoevolution/paper03-orbital-mechanics.git`

**HuggingFace Dataset**: `https://huggingface.co/datasets/AICoevolution/s64-orbital-v1`

**Zenodo (Paper 03)**: `https://doi.org/10.5281/zenodo.18347569`
## Contents

The repository includes:

| Artifact | Description |
|----------|-------------|
| `analysis/scripts/` | Curated figure scripts (Paper 03) |
| `analysis/datasets/` | Curated JSON datasets used to generate paper figures (includes a single legacy `_archive` run for Fig 4) |
| `open_source/` | Standalone semantic telemetry script (`semantic_telemetry.py`) and README |
| `FILE_TREE.txt` | Full file tree for citation |

Figures are rendered as part of the paper PDF/HTML outputs and are also available via the website viewer.
# References
analysis/datasets/09_steering_2026-01-15_16-26-38.json ADDED (diff too large to render; see raw diff)
analysis/datasets/09_steering_2026-01-17_18-58-47.json ADDED (diff too large to render; see raw diff)
analysis/datasets/09_steering_2026-01-17_19-19-31.json ADDED (diff too large to render; see raw diff)
analysis/datasets/09_steering_2026-01-19_22-58-59_human_session.json ADDED (diff too large to render; see raw diff)
analysis/datasets/09_steering_2026-01-20_13-27-51_human_session.json ADDED (diff too large to render; see raw diff)
analysis/datasets/human_run_01_deep.json ADDED (diff too large to render; see raw diff)
analysis/scripts/09_steering_experiment.py ADDED (diff too large to render; see raw diff)
analysis/scripts/15_human_session_comparison.py
ADDED
|
@@ -0,0 +1,345 @@
#!/usr/bin/env python3
"""
Figure 5: Human-AI Session Comparison

Generates a 3-panel comparison of:
- Session A (Deep Dive): Baseline, no metrics shown
- Session B (Whiplash): Real metrics shown, topic switches
- Session C (Gaslight): Adversarial metrics injected

Usage:
    python 15_human_session_comparison.py

Output:
    figures/FIG5_human_session_comparison.png
"""

import json
import sys
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np

# Paths
SCRIPT_DIR = Path(__file__).parent
RESULTS_DIR = SCRIPT_DIR.parent / "results"
FIGURES_DIR = SCRIPT_DIR.parent.parent / "figures"

# Session files (update if filenames change)
SESSION_FILES = {
    "A_deep": RESULTS_DIR / "human_run_01_deep.json",
    "B_whiplash": RESULTS_DIR / "09_steering_2026-01-19_22-58-59_human_session.json",
    "C_gaslight": RESULTS_DIR / "09_steering_2026-01-20_13-27-51_human_session.json",
}

SESSION_LABELS = {
    "A_deep": "Session A: Deep Dive\n(Baseline - No metrics shown)",
    "B_whiplash": "Session B: Topic Switching\n(Real metrics shown)",
    "C_gaslight": "Session C: Metrics Spoofing\n(Adversarial telemetry injected)",
}

SESSION_COLORS = {
    "A_deep": "#3498db",      # Blue
    "B_whiplash": "#9b59b6",  # Purple
    "C_gaslight": "#e74c3c",  # Red
}

STATE_COLORS = {
    "stable": "#2ecc71",     # Green
    "protostar": "#f39c12",  # Orange
    "split": "#e74c3c",      # Red
}
def load_session(filepath: Path) -> Tuple[List[Dict], str]:
    """Load session data and return turns + condition name."""
    with open(filepath, 'r', encoding='utf-8') as f:
        data = json.load(f)

    # Handle different formats
    if isinstance(data, dict) and "results" in data:
        results = data["results"]
        if results and len(results) > 0:
            return results[0].get("turns", []), results[0].get("condition_name", "unknown")
    elif isinstance(data, dict) and "turns" in data:
        return data["turns"], data.get("condition", "unknown")
    elif isinstance(data, list):
        # Old format: list of condition results
        if len(data) > 0:
            return data[0].get("turns", []), data[0].get("condition_name", "unknown")

    return [], "unknown"
def extract_metrics(turns: List[Dict]) -> Dict[str, List]:
    """Extract time series from turns."""
    xs = []
    sgi_vals = []
    vel_vals = []
    ctx_states = []
    ctx_ids = []

    for i, turn in enumerate(turns, start=1):
        xs.append(i)

        real = turn.get("real_metrics", {})

        # Get SGI (prefer turn_pair_sgi)
        sgi = (
            real.get("turn_pair_sgi_latest") or
            real.get("turn_pair_sgi_mean") or
            real.get("sgi_latest") or
            real.get("sgi_mean")
        )
        sgi_vals.append(float(sgi) if sgi is not None else None)

        # Get Velocity (prefer orbital)
        vel = (
            real.get("orbital_velocity_latest") or
            real.get("orbital_velocity_mean") or
            real.get("velocity_latest") or
            real.get("velocity_mean")
        )
        vel_vals.append(float(vel) if vel is not None else None)

        # Context state (last in array or latest)
        state_arr = real.get("per_turn_context_state", [])
        if state_arr:
            ctx_states.append(state_arr[-1])
        else:
            ctx_states.append(real.get("context_state_latest", "stable") or "stable")

        # Context ID
        id_arr = real.get("per_turn_context_id", [])
        if id_arr:
            ctx_ids.append(id_arr[-1])
        else:
            ctx_ids.append(real.get("context_id_latest", "ctx_1") or "ctx_1")

    return {
        "xs": xs,
        "sgi": sgi_vals,
        "velocity": vel_vals,
        "context_state": ctx_states,
        "context_id": ctx_ids,
    }
def compute_stats(metrics: Dict[str, List]) -> Dict[str, Any]:
    """Compute summary statistics."""
    sgi_clean = [v for v in metrics["sgi"] if v is not None]
    vel_clean = [v for v in metrics["velocity"] if v is not None]

    # Count context states
    state_counts = {}
    for s in metrics["context_state"]:
        state_counts[s] = state_counts.get(s, 0) + 1

    # Count context switches
    ctx_switches = 0
    prev_ctx = None
    for ctx in metrics["context_id"]:
        if prev_ctx is not None and ctx != prev_ctx:
            ctx_switches += 1
        prev_ctx = ctx

    return {
        "sgi_mean": np.mean(sgi_clean) if sgi_clean else None,
        "sgi_std": np.std(sgi_clean) if sgi_clean else None,
        "vel_mean": np.mean(vel_clean) if vel_clean else None,
        "vel_std": np.std(vel_clean) if vel_clean else None,
        "state_counts": state_counts,
        "context_switches": ctx_switches,
        "n_turns": len(metrics["xs"]),
    }
def plot_comparison():
    """Generate the 3-panel comparison figure."""

    # Load all sessions
    sessions = {}
    for key, filepath in SESSION_FILES.items():
        if filepath.exists():
            turns, condition = load_session(filepath)
            metrics = extract_metrics(turns)
            stats = compute_stats(metrics)
            sessions[key] = {
                "turns": turns,
                "condition": condition,
                "metrics": metrics,
                "stats": stats,
            }
            print(f"[OK] Loaded {key}: {len(turns)} turns")
        else:
            print(f"[WARN] File not found: {filepath}")

    if len(sessions) < 3:
        print(f"[WARN] Only {len(sessions)}/3 sessions found")

    # Create figure: 3 columns x 3 rows
    # Row 1: SGI over time
    # Row 2: Velocity over time
    # Row 3: Phase space (SGI x Velocity)
    fig, axes = plt.subplots(3, 3, figsize=(18, 14))

    session_keys = ["A_deep", "B_whiplash", "C_gaslight"]

    for col, key in enumerate(session_keys):
        if key not in sessions:
            for row in range(3):
                axes[row, col].set_visible(False)
            continue

        data = sessions[key]
        m = data["metrics"]
        stats = data["stats"]
        color = SESSION_COLORS[key]

        xs = m["xs"]
        sgi = [v if v is not None else np.nan for v in m["sgi"]]
        vel = [v if v is not None else np.nan for v in m["velocity"]]
        point_colors = [STATE_COLORS.get(s, "#3498db") for s in m["context_state"]]
        # Row 1: SGI
        ax1 = axes[0, col]
        ax1.plot(xs, sgi, linewidth=2, color=color, alpha=0.7)
        ax1.scatter(xs, sgi, c=point_colors, s=60, zorder=5, edgecolors='white', linewidth=0.5)
        ax1.axhline(1.0, color="gray", linestyle="--", alpha=0.6)
        ax1.set_ylabel("SGI (Orbital Radius)" if col == 0 else "")
        ax1.set_ylim(0, 2.0)
        ax1.set_title(SESSION_LABELS[key], fontweight="bold", fontsize=11)
        ax1.grid(True, alpha=0.3)

        # Add stats annotation
        if stats["sgi_mean"] is not None:
            ax1.text(0.95, 0.95, f"mean={stats['sgi_mean']:.2f}\nstd={stats['sgi_std']:.2f}",
                     transform=ax1.transAxes, ha='right', va='top', fontsize=9,
                     bbox=dict(boxstyle='round', facecolor='white', alpha=0.8))

        # Row 2: Velocity
        ax2 = axes[1, col]
        ax2.plot(xs, vel, linewidth=2, color=color, alpha=0.7)
        ax2.scatter(xs, vel, c=point_colors, s=60, zorder=5, edgecolors='white', linewidth=0.5)
        ax2.set_ylabel("Velocity (degrees)" if col == 0 else "")
        ax2.set_xlabel("Turn")
        ax2.set_ylim(0, 180)
        ax2.grid(True, alpha=0.3)

        # Add stats annotation
        if stats["vel_mean"] is not None:
            ax2.text(0.95, 0.95, f"mean={stats['vel_mean']:.1f}deg\nstd={stats['vel_std']:.1f}deg",
                     transform=ax2.transAxes, ha='right', va='top', fontsize=9,
                     bbox=dict(boxstyle='round', facecolor='white', alpha=0.8))
        # Row 3: Phase space
        ax3 = axes[2, col]

        # Coherence region
        sgi_min, sgi_max = 0.7, 1.3
        vel_min, vel_max = 15, 45
        ax3.add_patch(
            plt.Rectangle(
                (sgi_min, vel_min),
                sgi_max - sgi_min,
                vel_max - vel_min,
                facecolor="#2ecc71",
                alpha=0.15,
                edgecolor="#2ecc71",
                linewidth=2,
                linestyle="--",
            )
        )
        ax3.axvline(1.0, color="gray", linestyle="--", alpha=0.6, linewidth=1)
        ax3.plot(1.0, 30, "g*", markersize=14, zorder=5)

        # Plot trajectory with time gradient
        # IMPORTANT: build paired points to keep x/y/color lengths identical
        phase_points = [
            (sgi_v, vel_v, STATE_COLORS.get(state, "#3498db"))
            for sgi_v, vel_v, state in zip(m["sgi"], m["velocity"], m["context_state"])
            if sgi_v is not None and vel_v is not None
        ]
        phase_sgi = [p[0] for p in phase_points]
        phase_vel = [p[1] for p in phase_points]
        phase_colors = [p[2] for p in phase_points]

        if len(phase_points) > 1:
            # Time gradient
            cmap = plt.cm.viridis
            for i in range(len(phase_sgi) - 1):
                ax3.plot(
                    [phase_sgi[i], phase_sgi[i + 1]],
                    [phase_vel[i], phase_vel[i + 1]],
                    color=cmap(i / len(phase_sgi)),
                    linewidth=1.5,
                    alpha=0.6,
                )

        if phase_points:
            # Scatter with state colors
            ax3.scatter(phase_sgi, phase_vel, c=phase_colors, s=80, zorder=5, edgecolors='white', linewidth=0.5)

            # Mark start and end (guarded: indexing an empty trajectory would raise)
            ax3.scatter([phase_sgi[0]], [phase_vel[0]], marker='s', s=120, c='blue', zorder=6, label='Start')
            ax3.scatter([phase_sgi[-1]], [phase_vel[-1]], marker='^', s=120, c='red', zorder=6, label='End')

        ax3.set_xlabel("SGI (Orbital Radius)")
        ax3.set_ylabel("Velocity (degrees)" if col == 0 else "")
        ax3.set_xlim(0, 2.0)
        ax3.set_ylim(0, 180)
        ax3.grid(True, alpha=0.3)

        # Context switch annotation
        ax3.text(0.05, 0.95, f"Context Switches: {stats['context_switches']}",
                 transform=ax3.transAxes, ha='left', va='top', fontsize=9,
                 bbox=dict(boxstyle='round', facecolor='white', alpha=0.8))
    # Add legend for context states
    legend_patches = [
        mpatches.Patch(color=STATE_COLORS["stable"], label="Stable (anchored)"),
        mpatches.Patch(color=STATE_COLORS["protostar"], label="Protostar (forming)"),
        mpatches.Patch(color=STATE_COLORS["split"], label="Split (context change)"),
    ]
    fig.legend(handles=legend_patches, loc='upper center', ncol=3,
               bbox_to_anchor=(0.5, 0.02), fontsize=10)

    # Main title
    fig.suptitle("Figure 5: Human-AI Session Comparison\nSemantic Dynamics Across Conditions",
                 fontweight="bold", fontsize=14, y=0.98)

    plt.tight_layout(rect=[0, 0.05, 1, 0.95])

    # Save
    FIGURES_DIR.mkdir(parents=True, exist_ok=True)
    output_path = FIGURES_DIR / "FIG5_human_session_comparison.png"
    plt.savefig(output_path, dpi=200, bbox_inches="tight", facecolor="white")
    print(f"\n[OK] Saved: {output_path}")

    # Also print per-session stats
    print("\n" + "=" * 70)
    print("SESSION COMPARISON SUMMARY")
    print("=" * 70)

    for key in session_keys:
        if key not in sessions:
            continue
        stats = sessions[key]["stats"]
        print(f"\n{SESSION_LABELS[key].split(chr(10))[0]}:")
        print(f"  Turns: {stats['n_turns']}")
        if stats["sgi_mean"] is not None:  # 'is not None' so a 0.0 mean still prints
            print(f"  SGI: {stats['sgi_mean']:.3f} +/- {stats['sgi_std']:.3f}")
        if stats["vel_mean"] is not None:
            print(f"  Velocity: {stats['vel_mean']:.1f} +/- {stats['vel_std']:.1f} deg")
        print(f"  Context Switches: {stats['context_switches']}")
        print(f"  States: {stats['state_counts']}")

    print("\n" + "=" * 70)

    plt.show()


if __name__ == "__main__":
    plot_comparison()
analysis/scripts/16_human_fig3_merged.py
ADDED
|
@@ -0,0 +1,275 @@
#!/usr/bin/env python3
"""
Figure 3 (Human-AI): Merge Deep Dive / Topic Switching / Metrics Spoofing into a single overlay plot.

This produces the same 2x2 layout as the AI-AI Figure 3:
- SGI over turns (overlay by condition)
- Orbital velocity over turns (overlay by condition)
- Early turns (1-3) in phase space
- Late turns (last 3) in phase space

Inputs (expected in paper03/analysis/results):
- human_run_01_deep.json -> A_baseline
- 09_steering_2026-01-19_22-58-59_human_session.json -> E_real_metrics
- 09_steering_2026-01-20_13-27-51_human_session.json -> F_adversarial

Output (paper03/figures):
- FIG3_human_sessions_fig3_trajectory.png
"""

from __future__ import annotations

import json
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional


SCRIPT_DIR = Path(__file__).parent
RESULTS_DIR = SCRIPT_DIR.parent / "results"
FIGURES_DIR = SCRIPT_DIR.parent.parent / "figures"

SESSION_FILES = [
    RESULTS_DIR / "human_run_01_deep.json",
    RESULTS_DIR / "09_steering_2026-01-19_22-58-59_human_session.json",
    RESULTS_DIR / "09_steering_2026-01-20_13-27-51_human_session.json",
]

# Human-friendly condition labels for the paper figures
COND_LABELS = {
    "A_baseline": "A (Deep Dive)",
    "E_real_metrics": "E (Topic Switching)",
    "F_adversarial": "F (Metrics Spoofing)",
}

COND_NOTES = [
    "A: Deep Dive (human-led sustained topic)",
    "E: Topic Switching (human-led sudden shifts)",
    "F: Metrics Spoofing (injected adversarial telemetry; conversation remained normal/helpful)",
]
def _load_results(path: Path) -> List[Dict[str, Any]]:
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, dict) and "results" in data:
        return data.get("results") or []
    if isinstance(data, list):
        return data
    # fallback (unlikely): a single result dict
    if isinstance(data, dict) and "turns" in data:
        return [data]
    return []


def _merge_human_sessions(paths: List[Path]) -> List[Dict[str, Any]]:
    merged: List[Dict[str, Any]] = []
    for p in paths:
        if not p.exists():
            print(f"[WARN] Missing session file: {p}")
            continue
        r = _load_results(p)
        if not r:
            print(f"[WARN] No results found in: {p}")
            continue
        # each human JSON should have one result (one condition)
        merged.extend(r)
    return merged
def render_fig3(results: List[Dict[str, Any]], output_png: Path) -> None:
    try:
        import matplotlib.pyplot as plt
        import numpy as np
        import seaborn as sns
    except ImportError as e:
        print(f"Visualization requires matplotlib, seaborn: {e}")
        return

    # Organize by condition (match 09_steering_experiment.py Fig3 behavior)
    condition_data = defaultdict(lambda: {"sgis": [], "velocities": [], "turns": []})

    for r in results:
        cond = r.get("condition_name", "unknown")
        turns = r.get("turns", []) or []
        for t in turns:
            turn_num = t.get("turn_number", 0)
            rm = t.get("real_metrics", {}) or {}

            # Prefer turn-pair metrics
            sgi = rm.get("turn_pair_sgi_mean") or rm.get("turn_pair_sgi_latest") or rm.get("sgi_mean")
            vel = rm.get("orbital_velocity_latest") or rm.get("orbital_velocity_mean") or rm.get("velocity_mean")

            # mirror the filter used in Fig3 (avoid pathological 180 spikes from per-message only)
            if sgi is not None and vel is not None and float(vel) < 170:
                condition_data[cond]["sgis"].append(float(sgi))
                condition_data[cond]["velocities"].append(float(vel))
                condition_data[cond]["turns"].append(int(turn_num))

    if not condition_data:
        print("[Fig3 Human Merge] No valid trajectory data found")
        return

    colors = {
        "A_baseline": "#3498db",
        "E_real_metrics": "#1abc9c",
        "F_adversarial": "#e74c3c",
    }

    sns.set_style("whitegrid")
    fig, axes = plt.subplots(2, 2, figsize=(16, 14))
    fig.suptitle(
        "Figure 3 (Human-AI): Temporal Trajectory Analysis — Do Conditions Converge?",
        fontsize=14,
        fontweight="bold",
    )

    # Top-left: SGI over turns
    ax1 = axes[0, 0]
    for cond, data in condition_data.items():
        if not data["turns"]:
            continue
        turn_sgi = defaultdict(list)
        for t, s in zip(data["turns"], data["sgis"]):
            turn_sgi[t].append(s)
        turns_sorted = sorted(turn_sgi.keys())
        sgi_means = [float(np.mean(turn_sgi[t])) for t in turns_sorted]
        sgi_stds = [float(np.std(turn_sgi[t])) for t in turns_sorted]
        color = colors.get(cond, "#7f8c8d")
        label = COND_LABELS.get(cond, cond)
        ax1.plot(turns_sorted, sgi_means, "o-", color=color, label=label, linewidth=2, markersize=8)
        ax1.fill_between(
            turns_sorted,
            [m - s for m, s in zip(sgi_means, sgi_stds)],
            [m + s for m, s in zip(sgi_means, sgi_stds)],
            alpha=0.2,
            color=color,
        )
    ax1.set_xlabel("Turn Number", fontsize=12)
    ax1.set_ylabel("SGI (mean +/- std)", fontsize=12)
    ax1.set_title("SGI Evolution Over Turns", fontsize=12)
    ax1.set_ylim(0, 1.5)
    ax1.legend(loc="upper right", fontsize=9)

    # Top-right: Velocity over turns
    ax2 = axes[0, 1]
    for cond, data in condition_data.items():
        if not data["turns"]:
            continue
        turn_vel = defaultdict(list)
        for t, v in zip(data["turns"], data["velocities"]):
            turn_vel[t].append(v)
        turns_sorted = sorted(turn_vel.keys())
        vel_means = [float(np.mean(turn_vel[t])) for t in turns_sorted]
        vel_stds = [float(np.std(turn_vel[t])) for t in turns_sorted]
        color = colors.get(cond, "#7f8c8d")
        label = COND_LABELS.get(cond, cond)
        ax2.plot(turns_sorted, vel_means, "s-", color=color, label=label, linewidth=2, markersize=8)
        ax2.fill_between(
            turns_sorted,
            [m - s for m, s in zip(vel_means, vel_stds)],
            [m + s for m, s in zip(vel_means, vel_stds)],
|
| 172 |
+
alpha=0.2,
|
| 173 |
+
color=color,
|
| 174 |
+
)
|
| 175 |
+
ax2.set_xlabel("Turn Number", fontsize=12)
|
| 176 |
+
ax2.set_ylabel("Orbital Velocity (mean +/- std)", fontsize=12)
|
| 177 |
+
ax2.set_title("Orbital Velocity (Turn-Pair) Over Turns", fontsize=12)
|
| 178 |
+
ax2.set_ylim(0, 180)
|
| 179 |
+
ax2.legend(loc="upper right", fontsize=9)
|
| 180 |
+
|
| 181 |
+
early_k = 3
|
| 182 |
+
late_k = 3
|
| 183 |
+
|
| 184 |
+
# Bottom-left: Early turns in phase space
|
| 185 |
+
ax3 = axes[1, 0]
|
| 186 |
+
ax3.add_patch(plt.Rectangle((0.3, 0), 0.9, 45, alpha=0.15, color="green", label="Coherence Region"))
|
| 187 |
+
for cond, data in condition_data.items():
|
| 188 |
+
early_sgi = [s for s, t in zip(data["sgis"], data["turns"]) if t <= early_k]
|
| 189 |
+
early_vel = [v for v, t in zip(data["velocities"], data["turns"]) if t <= early_k]
|
| 190 |
+
if not early_sgi:
|
| 191 |
+
continue
|
| 192 |
+
color = colors.get(cond, "#7f8c8d")
|
| 193 |
+
label = COND_LABELS.get(cond, cond)
|
| 194 |
+
ax3.scatter(early_sgi, early_vel, color=color, alpha=0.7, s=100, label=label, edgecolor="black")
|
| 195 |
+
ax3.scatter(
|
| 196 |
+
[float(np.mean(early_sgi))],
|
| 197 |
+
[float(np.mean(early_vel))],
|
| 198 |
+
color=color,
|
| 199 |
+
s=300,
|
| 200 |
+
marker="*",
|
| 201 |
+
edgecolor="black",
|
| 202 |
+
linewidth=2,
|
| 203 |
+
)
|
| 204 |
+
ax3.set_xlabel("SGI", fontsize=12)
|
| 205 |
+
ax3.set_ylabel("Orbital Velocity (degrees)", fontsize=12)
|
| 206 |
+
ax3.set_title(f"Early Turns (1-{early_k}): Where Do Conditions START?", fontsize=12)
|
| 207 |
+
ax3.set_xlim(0, 1.5)
|
| 208 |
+
ax3.set_ylim(0, 180)
|
| 209 |
+
ax3.legend(loc="upper right", fontsize=9)
|
| 210 |
+
|
| 211 |
+
# Bottom-right: Late turns (last k) in phase space (per condition)
|
| 212 |
+
ax4 = axes[1, 1]
|
| 213 |
+
ax4.add_patch(plt.Rectangle((0.3, 0), 0.9, 45, alpha=0.15, color="green", label="Coherence Region"))
|
| 214 |
+
for cond, data in condition_data.items():
|
| 215 |
+
valid_turns = [t for t in data["turns"] if isinstance(t, int)]
|
| 216 |
+
if not valid_turns:
|
| 217 |
+
continue
|
| 218 |
+
max_turn = max(valid_turns)
|
| 219 |
+
cutoff = max_turn - late_k + 1
|
| 220 |
+
late_sgi = [s for s, t in zip(data["sgis"], data["turns"]) if t >= cutoff]
|
| 221 |
+
late_vel = [v for v, t in zip(data["velocities"], data["turns"]) if t >= cutoff]
|
| 222 |
+
if not late_sgi:
|
| 223 |
+
continue
|
| 224 |
+
color = colors.get(cond, "#7f8c8d")
|
| 225 |
+
label = COND_LABELS.get(cond, cond)
|
| 226 |
+
ax4.scatter(late_sgi, late_vel, color=color, alpha=0.7, s=100, label=label, edgecolor="black")
|
| 227 |
+
ax4.scatter(
|
| 228 |
+
[float(np.mean(late_sgi))],
|
| 229 |
+
[float(np.mean(late_vel))],
|
| 230 |
+
color=color,
|
| 231 |
+
s=300,
|
| 232 |
+
marker="*",
|
| 233 |
+
edgecolor="black",
|
| 234 |
+
linewidth=2,
|
| 235 |
+
)
|
| 236 |
+
ax4.set_xlabel("SGI", fontsize=12)
|
| 237 |
+
ax4.set_ylabel("Orbital Velocity (degrees)", fontsize=12)
|
| 238 |
+
ax4.set_title(f"Late Turns (last {late_k}): Where Do Conditions END?", fontsize=12)
|
| 239 |
+
ax4.set_xlim(0, 1.5)
|
| 240 |
+
ax4.set_ylim(0, 180)
|
| 241 |
+
ax4.legend(loc="upper right", fontsize=9)
|
| 242 |
+
|
| 243 |
+
# Add human-session context note
|
| 244 |
+
fig.text(
|
| 245 |
+
0.5,
|
| 246 |
+
0.01,
|
| 247 |
+
" | ".join(COND_NOTES),
|
| 248 |
+
ha="center",
|
| 249 |
+
va="bottom",
|
| 250 |
+
fontsize=9,
|
| 251 |
+
color="#333333",
|
| 252 |
+
)
|
| 253 |
+
|
| 254 |
+
plt.tight_layout(rect=[0, 0.03, 1, 0.96])
|
| 255 |
+
|
| 256 |
+
output_png.parent.mkdir(parents=True, exist_ok=True)
|
| 257 |
+
plt.savefig(output_png, dpi=300, bbox_inches="tight", facecolor="white")
|
| 258 |
+
print(f"[OK] Saved merged human Fig3 to: {output_png}")
|
| 259 |
+
plt.show()
|
| 260 |
+
|
| 261 |
+
|
| 262 |
+
def main() -> None:
|
| 263 |
+
merged = _merge_human_sessions(SESSION_FILES)
|
| 264 |
+
if not merged:
|
| 265 |
+
print("[ERROR] No sessions loaded; cannot render merged Fig3.")
|
| 266 |
+
raise SystemExit(1)
|
| 267 |
+
|
| 268 |
+
out = FIGURES_DIR / "FIG3_human_sessions_fig3_trajectory.png"
|
| 269 |
+
render_fig3(merged, out)
|
| 270 |
+
|
| 271 |
+
|
| 272 |
+
if __name__ == "__main__":
|
| 273 |
+
main()
|
| 274 |
+
|
| 275 |
+
|
open_source/README.md
ADDED
|
@@ -0,0 +1,127 @@
# AICoevolution Semantic Telemetry

**Measure conversational dynamics in real-time.**

This open-source script demonstrates the core capabilities of the AICoevolution SDK for measuring semantic dynamics in human-AI conversations.

## What It Measures

| Metric | Description | Range |
|--------|-------------|-------|
| **SGI (Orbital Radius)** | Balance between query responsiveness and context grounding | ~0.5-1.5 |
| **Velocity** | Rate of semantic movement per turn | 0-180° |
| **Context Phase** | Topic coherence state | stable / protostar / split |
| **Context Mass** | Accumulated turns in current topic | 0-N |
| **Attractor Count** | Number of competing topic centers | 1+ |
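These metrics arrive as fields on the SDK's ingest response. As a minimal illustration of collecting them client-side (the field names follow the scripts bundled in this repo; the response dict below is fabricated for the example, not real SDK output):

```python
# Illustrative only: a fabricated SDK-style response and a tiny reader.
# Field names (turn_pair_sgi_latest, orbital_velocity_latest, context_phase,
# context_mass, attractor_count) follow the bundled example scripts.

def read_metrics(resp: dict) -> dict:
    """Collect the five telemetry metrics, preferring turn-pair values."""
    return {
        "sgi": resp.get("turn_pair_sgi_latest") or resp.get("sgi_latest"),
        "velocity": resp.get("orbital_velocity_latest") or resp.get("angular_velocity_latest"),
        "context_phase": resp.get("context_phase", "stable"),
        "context_mass": resp.get("context_mass", 0),
        "attractor_count": resp.get("attractor_count", 1),
    }

# Fabricated response, shaped like the "Example Output" section below.
example = {
    "turn_pair_sgi_latest": 0.891,
    "orbital_velocity_latest": 28.7,
    "context_phase": "stable",
    "context_mass": 5,
    "attractor_count": 1,
}
print(read_metrics(example))
```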
## Quick Start

### 1. Install Dependencies

```bash
pip install requests
```

### 2. Get an API Key

Get a free API key by logging in at [https://aicoevolution.com](https://aicoevolution.com) and generating one under User Settings → API Keys.

### 3. Run the standalone script

```bash
# With cloud API
python semantic_telemetry.py --api-key YOUR_API_KEY

# Optional: custom SDK URL (e.g. local self-hosted)
python semantic_telemetry.py --api-key YOUR_API_KEY --url http://localhost:8001

# Custom turns
python semantic_telemetry.py --api-key YOUR_API_KEY --turns 20
```

## Minimal thin-client example (PyPI)

If you prefer integrating the SDK via a Python dependency:

```bash
pip install aicoevolution
set AIC_SDK_API_KEY=aic_.....
python hello_aicoevolution.py
```

## Example Output

```
╔═══════════════════════════════════════════════════════════════════════════════╗
║                    SEMANTIC TELEMETRY - AICoevolution SDK                     ║
╚═══════════════════════════════════════════════════════════════════════════════╝

──────────────────────────────────────────────────
Turn 5/10
──────────────────────────────────────────────────

[YOU]: I'm trying to understand how meaning emerges in conversation
[SDK] <- OK | SGI=0.952, Velocity=34.2°

[AI]: Meaning emerges through the dynamic interplay between what's said and the accumulated context...
[SDK] <- OK | SGI=0.891, Velocity=28.7°

────────────────────────────── SEMANTIC METRICS ──────────────────────────────
  SGI (Orbital Radius):  0.891
  Velocity (degrees):    28.7°
  Context ID:            ctx_1
  Context State:         stable
  Attractor Count:       1
  Active Context Mass:   5 turns
──────────────────────────────────────────────────────────────────────
```

## Understanding the Metrics

### Coherence Region
Productive conversations tend to occupy:
- **SGI**: 0.7 - 1.3 (balanced orbit)
- **Velocity**: 15° - 60° (productive movement)

### Context Phases
- **stable** (🟢): Conversation anchored to current topic
- **protostar** (🟠): New topic forming, may switch soon
- **split** (🔴): Topic changed, new context promoted

### Orbital Energy
`E_orb = SGI × Velocity`
- Higher = more dynamic, potentially unstable
- Lower = more grounded, potentially stagnant
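The thresholds above combine into simple client-side checks. A rough sketch (these helpers are not part of the SDK; the ranges are the ones listed in this section):

```python
# Illustrative helpers, not SDK API: classify a turn against the coherence
# region described above, and compute the orbital energy E_orb = SGI x Velocity.

def in_coherence_region(sgi: float, velocity_deg: float) -> bool:
    """True when SGI is in 0.7-1.3 and velocity in 15-60 degrees."""
    return 0.7 <= sgi <= 1.3 and 15.0 <= velocity_deg <= 60.0

def orbital_energy(sgi: float, velocity_deg: float) -> float:
    """Higher = more dynamic (potentially unstable); lower = more grounded."""
    return sgi * velocity_deg

# Values taken from the example output above.
print(in_coherence_region(0.891, 28.7))
print(round(orbital_energy(0.891, 28.7), 2))
```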
## Under Development

The following metrics are planned for future releases:

- **Domain Distribution**: Cognitive / Somatic / Emotional / Volitional balance
- **Symbolic Depth (S64)**: Transformation path detection
- **Mass Contribution**: Who is steering the conversation?
- **Grammatical Hierarchy**: Geometric grammar detection

## Learn More

- **Paper**: "Semantic Orbital Mechanics" (Jimenez Sanchez, 2026)
- **Website**: [https://aicoevolution.com](https://aicoevolution.com)
- **SDK Documentation**: [https://docs.aicoevolution.com](https://docs.aicoevolution.com)

## License

MIT License - Use freely for research and commercial applications.

## Citation

If you use this in research, please cite:

```bibtex
@article{jimenez2026orbital,
  title={Semantic Orbital Mechanics: Measuring and Guiding AI Conversation Dynamics},
  author={Jimenez Sanchez, Juan Jacobo},
  journal={arXiv preprint},
  year={2026}
}
```
open_source/hello_aicoevolution.py
ADDED
|
@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
hello_aicoevolution.py
======================

Minimal working example for the PyPI package:

    pip install aicoevolution

Usage (recommended):
    # Windows PowerShell:
    #   $env:AIC_SDK_API_KEY="aic_..."
    # cmd.exe:
    #   set AIC_SDK_API_KEY=aic_...

    python hello_aicoevolution.py

Optional:
    set AIC_SDK_URL=https://sdk.aicoevolution.com
"""

from __future__ import annotations

import os
import time
import uuid

# This script is intended to be run on any machine after:
#   pip install aicoevolution
#
# Note: If you run this file *inside the MirrorMind monorepo*, Python may import
# the local `aicoevolution_sdk/` package instead of the PyPI thin client module.
# For end users, copy this file into a normal folder (or a fresh venv) and run it there.
from aicoevolution_sdk import AICoevolutionClient, SDKAuth


def _print_metrics(label: str, resp: dict) -> None:
    """Print SDK metrics with Paper 03 canonical extraction.

    Paper 03 metrics:
    - SGI: turn_pair_sgi_latest (or fallback to sgi_latest)
    - Velocity: orbital_velocity_latest (turn-pair, ~25-45°)
      Fallback: angular_velocity_latest (per-message, ~75-180°)

    The SDK now exposes these at top level for easy access.
    """
    print(f"\n[{label}]")
    print("conversation_id:", resp.get("conversation_id"))
    print("message_count:", resp.get("message_count"))

    # Paper 03: Prefer turn-pair SGI when available (now at top level)
    sgi = resp.get("turn_pair_sgi_latest") or resp.get("sgi_latest")

    # Paper 03: Prefer orbital velocity (turn-pair, ~25-45°) over angular (per-message, ~75-180°)
    velocity = resp.get("orbital_velocity_latest") or resp.get("angular_velocity_latest")

    # Core telemetry fields (may be None early in a conversation)
    print("sgi_latest:", sgi)
    print("orbital_velocity_latest:", velocity)
    print("context_phase:", resp.get("context_phase"))
    print("context_mass:", resp.get("context_mass"))
    print("attractor_count:", resp.get("attractor_count"))

    quota = resp.get("quota")
    if isinstance(quota, dict):
        remaining_units = quota.get("remaining")
        try:
            remaining_units_int = int(remaining_units)
            turns_left = max(0, remaining_units_int // 2)  # ~2 ingest calls per Q/A turn
            print("turns_left_this_week:", turns_left)
        except Exception:
            # If quota is unlimited (-1) or missing, just print the raw object.
            print("quota:", quota)


def main() -> None:
    api_key = (os.getenv("AIC_SDK_API_KEY") or "").strip()
    if not api_key:
        raise SystemExit("Missing AIC_SDK_API_KEY. Set it to your aic_... key and retry.")

    base_url = (os.getenv("AIC_SDK_URL") or "https://sdk.aicoevolution.com").strip()

    sdk = AICoevolutionClient(
        base_url=base_url,
        auth=SDKAuth(user_api_key=api_key),
        timeout_s=30.0,
    )

    conversation_id = f"hello_{uuid.uuid4().hex[:8]}"
    now_ms = lambda: int(time.time() * 1000)

    # 1) Ingest user message
    r1 = sdk.ingest(
        conversation_id=conversation_id,
        role="user",
        text="Hello — I'm testing AICoevolution semantic telemetry.",
        timestamp_ms=now_ms(),
    )
    _print_metrics("USER INGEST", r1)

    # 2) Ingest assistant message (this is when SGI/velocity typically becomes meaningful)
    r2 = sdk.ingest(
        conversation_id=conversation_id,
        role="assistant",
        text="Great. Tell me what topic you'd like to explore and I’ll respond concisely.",
        timestamp_ms=now_ms(),
    )
    _print_metrics("ASSISTANT INGEST", r2)


if __name__ == "__main__":
    main()
open_source/semantic_telemetry.py
ADDED
|
@@ -0,0 +1,437 @@
#!/usr/bin/env python3
"""
AICoevolution Semantic Telemetry (Production)
=============================================
A lightweight tool to measure semantic dynamics in human-AI conversations.
This version is optimized for production use with the AICoevolution Cloud SDK.

Usage:
    python semantic_telemetry.py --api-key aic_...
    python semantic_telemetry.py --api-key aic_... --hosted-ai

Requirements:
    pip install requests
"""

import argparse
import json
import os
import sys
import time
import uuid
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

# =============================================================================
# CONFIGURATION
# =============================================================================

DEFAULT_SDK_URL = "https://sdk.aicoevolution.com"

# ANSI colors for terminal output
class Colors:
    HEADER = '\033[95m'
    BLUE = '\033[94m'
    CYAN = '\033[96m'
    GREEN = '\033[92m'
    YELLOW = '\033[93m'
    RED = '\033[91m'
    ENDC = '\033[0m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'

def _disable_colors():
    """Disable ANSI colors (prints become plain text)."""
    for k in list(Colors.__dict__.keys()):
        if k.isupper():
            setattr(Colors, k, "")

def _init_console_colors(enable: bool) -> None:
    """
    Ensure colors render on Windows terminals.
    If color support isn't available, fall back to plain text.
    """
    if not enable or os.getenv("NO_COLOR") or os.getenv("AIC_NO_COLOR"):
        _disable_colors()
        return

    if os.name == "nt":
        try:
            import colorama  # type: ignore
            # Translates ANSI escapes for Windows consoles.
            try:
                colorama.just_fix_windows_console()
            except Exception:
                colorama.init()
        except Exception:
            _disable_colors()

# =============================================================================
# DATA STRUCTURES
# =============================================================================

@dataclass
class TelemetryMetrics:
    """Standardized metrics from SDK response."""
    sgi: Optional[float]
    velocity: Optional[float]
    context_phase: str
    context_mass: int
    attractor_count: int
    context_drift: float
    processing_time_ms: int

@dataclass
class CoevolutionIndex:
    """Computed Coevolution Index (CI)."""
    coevolution_index: float
    tier: str  # BASIC | ELEVATED | HIGH
    horizontal_score: float
    vertical_score: float

    # Components
    coherence_region_occupancy: float
    dyadic_coherence_index: float
    context_stability: float
    symbolic_entropy: float
    path_transformation_density: float
    domain_balance_index: float

# =============================================================================
# SDK CLIENT
# =============================================================================

class SemanticTelemetryClient:
    """Client for interacting with the AICoevolution SDK."""

    def __init__(self, api_key: str, base_url: str = DEFAULT_SDK_URL):
        self.api_key = api_key
        self.base_url = base_url.rstrip("/")
        self.conversation_id = str(uuid.uuid4())
        self.message_count = 0
        self.messages = []  # Local history for hosted AI context

        # Validate connection on init
        try:
            import requests
            self.session = requests.Session()
            if api_key:
                self.session.headers.update({"Authorization": f"Bearer {api_key}"})
        except ImportError:
            print("Error: 'requests' library not found.")
            print("Please install it: pip install requests")
            sys.exit(1)

    def ingest_message(self, role: str, text: str) -> Optional[Dict[str, Any]]:
        """Send a message to the SDK for telemetry analysis."""
        url = f"{self.base_url}/v0/ingest"
        payload = {
            "conversation_id": self.conversation_id,
            "role": role,
            "text": text,
            "timestamp_ms": int(time.time() * 1000)
        }

        try:
            response = self.session.post(url, json=payload, timeout=10)
            if response.status_code == 401:
                # FastAPI typically returns JSON {"detail": ...}; include body when present.
                body = (response.text or "").strip()
                if body:
                    body = body[:800] + ("..." if len(body) > 800 else "")
                    print(f"\n{Colors.RED}[SDK Error] Invalid API Key :: {body}{Colors.ENDC}")
                else:
                    print(f"\n{Colors.RED}[SDK Error] Invalid API Key.{Colors.ENDC}")
                return None
            response.raise_for_status()
            self.message_count += 1
            return response.json()
        except Exception as e:
            # Surface server-provided details where possible.
            resp = getattr(e, "response", None)
            if resp is not None:
                body = (resp.text or "").strip()
                body = body[:800] + ("..." if len(body) > 800 else "")
                if body:
                    print(f"\n{Colors.RED}[SDK Error] Connection failed: {e} :: {body}{Colors.ENDC}")
                    return None
            print(f"\n{Colors.RED}[SDK Error] Connection failed: {e}{Colors.ENDC}")
            return None

    def hosted_chat(self, user_message: str) -> Optional[Dict[str, Any]]:
        """Send message to hosted AI endpoint (paid tier only)."""
        url = f"{self.base_url}/v0/chat"
        payload = {
            "message": user_message,
            "conversation_id": self.conversation_id,
            "messages": self.messages[-10:]  # Send recent context
        }

        try:
            # Hosted AI can take longer (LLM generation)
            response = self.session.post(url, json=payload, timeout=60)

            if response.status_code == 401:
                body = (response.text or "").strip()
                if body:
                    body = body[:800] + ("..." if len(body) > 800 else "")
                    print(f"\n{Colors.RED}[SDK Error] Unauthorized :: {body}{Colors.ENDC}")
                else:
                    print(f"\n{Colors.RED}[SDK Error] Unauthorized. Hosted AI requires a paid API key.{Colors.ENDC}")
                return None
            elif response.status_code == 402:
                body = (response.text or "").strip()
                if body:
                    body = body[:800] + ("..." if len(body) > 800 else "")
                    print(f"\n{Colors.RED}[SDK Error] Payment required :: {body}{Colors.ENDC}")
                else:
                    print(f"\n{Colors.RED}[SDK Error] Payment required. Upgrade to a paid tier for Hosted AI.{Colors.ENDC}")
                return None

            response.raise_for_status()
            self.message_count += 2  # User + Assistant
            return response.json()
        except Exception as e:
            resp = getattr(e, "response", None)
            if resp is not None:
                body = (resp.text or "").strip()
                body = body[:800] + ("..." if len(body) > 800 else "")
                if body:
                    print(f"\n{Colors.RED}[SDK Error] Chat request failed: {e} :: {body}{Colors.ENDC}")
                    return None
            print(f"\n{Colors.RED}[SDK Error] Chat request failed: {e}{Colors.ENDC}")
            return None

    def extract_metrics(self, response: Dict[str, Any]) -> TelemetryMetrics:
        """Extract standardized metrics from SDK response.

        Paper 03 canonical metrics:
        - SGI: turn_pair_sgi_latest (or fallback to sgi_latest)
        - Velocity: orbital_velocity_latest (turn-pair, ~25-45°)
          Fallback: angular_velocity_latest (per-message, ~75-180°)

        Turn-pair metrics have lower variance and are the canonical choice.
        The SDK now exposes these at top level for easy access.
        """
        # Paper 03: Prefer turn-pair SGI when available (now at top level)
        sgi = (
            response.get("turn_pair_sgi_latest")
            or response.get("sgi_latest")
        )

        # Paper 03: Prefer orbital velocity (turn-pair, ~25-45°) over angular (per-message, ~75-180°)
        velocity = (
            response.get("orbital_velocity_latest")
            or response.get("angular_velocity_latest")
        )

        return TelemetryMetrics(
            sgi=sgi,
            velocity=velocity,
            context_phase=response.get("context_phase", "stable"),
            context_mass=response.get("context_mass", 0),
            attractor_count=response.get("attractor_count", 1),
            context_drift=response.get("context_drift", 0.0),
            processing_time_ms=response.get("processing_time_ms", 0)
        )

# =============================================================================
# COEVOLUTION TRACKER (Client-Side Logic)
# =============================================================================

class CoevolutionTracker:
    """Tracks session dynamics to compute the Coevolution Index."""

    def __init__(self):
        self.turns: List[TelemetryMetrics] = []
        self.sgi_history: List[float] = []
        self.velocity_history: List[float] = []

    def add_turn(self, metrics: TelemetryMetrics):
        self.turns.append(metrics)
        if metrics.sgi is not None:
            self.sgi_history.append(metrics.sgi)
        if metrics.velocity is not None:
            self.velocity_history.append(metrics.velocity)

    def compute_index(self) -> CoevolutionIndex:
        """Compute the Coevolution Index based on accumulated history."""
        if not self.turns:
            return CoevolutionIndex(0, "BASIC", 0, 0, 0, 0, 0, 0, 0, 0)

        # 1. Horizontal Score (Hs) - Dynamics
        # Coherence Region: SGI > 0.6 AND Velocity < 30 deg
        coherence_count = sum(1 for s, v in zip(self.sgi_history, self.velocity_history)
                              if s > 0.6 and v < 30.0)
        coherence_occupancy = coherence_count / len(self.turns) if self.turns else 0

        # Dyadic Coherence: Mean SGI
        dyadic_coherence = sum(self.sgi_history) / len(self.sgi_history) if self.sgi_history else 0

        # Context Stability: 1.0 - normalized drift
        avg_drift = sum(t.context_drift for t in self.turns) / len(self.turns) if self.turns else 0
        context_stability = max(0.0, 1.0 - (avg_drift / 100.0))

        hs = (coherence_occupancy * 0.4) + (dyadic_coherence * 0.4) + (context_stability * 0.2)

        # 2. Vertical Score (Vs) - Depth (Simplified for Prod)
        # In the production script without S64/Stage1, we use placeholders or simplified proxies.
        # For now, these are fixed to baseline values; the full S64 pipeline lives in the Dev script.
        symbolic_entropy = 0.5
        path_density = 0.0
        domain_balance = 0.5

        vs = (symbolic_entropy * 0.3) + (path_density * 0.4) + (domain_balance * 0.3)

        # 3. Coevolution Index
        ci = (hs * 0.6) + (vs * 0.4)

        # Tier
        if ci >= 0.7:
            tier = "HIGH"
        elif ci >= 0.4:
            tier = "ELEVATED"
        else:
            tier = "BASIC"

        return CoevolutionIndex(
            coevolution_index=ci,
            tier=tier,
            horizontal_score=hs,
            vertical_score=vs,
            coherence_region_occupancy=coherence_occupancy,
            dyadic_coherence_index=dyadic_coherence,
            context_stability=context_stability,
            symbolic_entropy=symbolic_entropy,
            path_transformation_density=path_density,
            domain_balance_index=domain_balance
        )

# =============================================================================
# UI HELPERS
# =============================================================================

def print_header():
    print(f"\n{Colors.CYAN}{Colors.BOLD}")
    print("╔══════════════════════════════════════════════════════════════╗")
    print("║           AICoevolution Semantic Telemetry (v1.0)            ║")
    print("╚══════════════════════════════════════════════════════════════╝")
    print(f"{Colors.ENDC}")

def print_metrics(m: TelemetryMetrics):
    print(f"{Colors.BLUE}  Metrics:{Colors.ENDC}")

    # SGI
    sgi_color = Colors.GREEN if (m.sgi or 0) > 0.7 else Colors.YELLOW if (m.sgi or 0) > 0.4 else Colors.RED
    print(f"    SGI: {sgi_color}{m.sgi:.3f}{Colors.ENDC}" if m.sgi is not None else "    SGI: N/A")
|
| 324 |
+
|
| 325 |
+
# Velocity
|
| 326 |
+
vel_color = Colors.GREEN if (m.velocity or 0) < 15 else Colors.YELLOW if (m.velocity or 0) < 45 else Colors.RED
|
| 327 |
+
print(f" Velocity: {vel_color}{m.velocity:.1f}°{Colors.ENDC}" if m.velocity is not None else " Velocity: N/A")
|
| 328 |
+
|
| 329 |
+
# Context
|
| 330 |
+
print(f" Context Phase: {m.context_phase}")
|
| 331 |
+
print(f" Context Mass: {m.context_mass}")
|
| 332 |
+
|
| 333 |
+
def print_ci(ci: CoevolutionIndex):
|
| 334 |
+
tier_color = Colors.GREEN if ci.tier == "HIGH" else Colors.YELLOW if ci.tier == "ELEVATED" else Colors.RED
|
| 335 |
+
print(f"\n{Colors.BOLD}──────────────────────── COEVOLUTION INDEX ─────────────────────────{Colors.ENDC}")
|
| 336 |
+
print(f" CI: {tier_color}{ci.coevolution_index:.3f} [{ci.tier}]{Colors.ENDC}")
|
| 337 |
+
print(f" Horizontal Score: {ci.horizontal_score:.3f}")
|
| 338 |
+
print(f" Vertical Score: {ci.vertical_score:.3f} (Limited in Prod)")
|
| 339 |
+
print(f"{Colors.BOLD}────────────────────────────────────────────────────────────────────{Colors.ENDC}")
|
| 340 |
+
|
| 341 |
+
# =============================================================================
|
| 342 |
+
# MAIN LOOP
|
| 343 |
+
# =============================================================================
|
| 344 |
+
|
| 345 |
+
def run_session(client: SemanticTelemetryClient, turns: int, hosted_ai: bool):
|
| 346 |
+
print_header()
|
| 347 |
+
print(f"Session ID: {client.conversation_id}")
|
| 348 |
+
print(f"Target: {client.base_url}")
|
| 349 |
+
print(f"Mode: {'Hosted AI' if hosted_ai else 'Manual Entry'}")
|
| 350 |
+
print("\nType 'quit' to exit.\n")
|
| 351 |
+
|
| 352 |
+
tracker = CoevolutionTracker()
|
| 353 |
+
|
| 354 |
+
for i in range(turns):
|
| 355 |
+
print(f"\n{Colors.BOLD}--- Turn {i+1}/{turns} ---{Colors.ENDC}")
|
| 356 |
+
|
| 357 |
+
# User Input
|
| 358 |
+
try:
|
| 359 |
+
user_text = input(f"{Colors.GREEN}[YOU]:{Colors.ENDC} ").strip()
|
| 360 |
+
except (KeyboardInterrupt, EOFError):
|
| 361 |
+
break
|
| 362 |
+
|
| 363 |
+
if user_text.lower() in ('quit', 'exit'):
|
| 364 |
+
break
|
| 365 |
+
if not user_text:
|
| 366 |
+
continue
|
| 367 |
+
|
| 368 |
+
# Process Turn
|
| 369 |
+
if hosted_ai:
|
| 370 |
+
print(f"{Colors.CYAN} ... generating response ...{Colors.ENDC}")
|
| 371 |
+
client.messages.append({"role": "user", "content": user_text})
|
| 372 |
+
|
| 373 |
+
data = client.hosted_chat(user_text)
|
| 374 |
+
if not data: continue
|
| 375 |
+
|
| 376 |
+
reply = data.get("reply", "")
|
| 377 |
+
sdk_data = data.get("sdk", {})
|
| 378 |
+
quota = data.get("quota") if isinstance(data, dict) else None
|
| 379 |
+
turns_left = None
|
| 380 |
+
try:
|
| 381 |
+
if isinstance(quota, dict) and isinstance(quota.get("remaining"), int):
|
| 382 |
+
turns_left = max(0, int(quota["remaining"]) // 2)
|
| 383 |
+
except Exception:
|
| 384 |
+
turns_left = None
|
| 385 |
+
client.messages.append({"role": "assistant", "content": reply})
|
| 386 |
+
|
| 387 |
+
print(f"{Colors.BLUE}[AI]:{Colors.ENDC} {reply}\n")
|
| 388 |
+
|
| 389 |
+
metrics = client.extract_metrics(sdk_data)
|
| 390 |
+
print_metrics(metrics)
|
| 391 |
+
tracker.add_turn(metrics)
|
| 392 |
+
print_ci(tracker.compute_index())
|
| 393 |
+
if turns_left is not None:
|
| 394 |
+
print(f"{Colors.CYAN} Turns left this week: {turns_left}{Colors.ENDC}")
|
| 395 |
+
|
| 396 |
+
else:
|
| 397 |
+
# Manual Mode
|
| 398 |
+
print(f"{Colors.CYAN} ... ingesting ...{Colors.ENDC}")
|
| 399 |
+
resp = client.ingest_message("user", user_text)
|
| 400 |
+
if resp:
|
| 401 |
+
m = client.extract_metrics(resp)
|
| 402 |
+
sgi_str = f"{m.sgi:.2f}" if m.sgi is not None else "N/A"
|
| 403 |
+
print(f" [SDK] User turn ingested (SGI={sgi_str})")
|
| 404 |
+
|
| 405 |
+
try:
|
| 406 |
+
ai_text = input(f"{Colors.BLUE}[AI]:{Colors.ENDC} ").strip()
|
| 407 |
+
except (KeyboardInterrupt, EOFError):
|
| 408 |
+
break
|
| 409 |
+
|
| 410 |
+
if not ai_text: ai_text = "(no response)"
|
| 411 |
+
|
| 412 |
+
print(f"{Colors.CYAN} ... ingesting ...{Colors.ENDC}")
|
| 413 |
+
resp = client.ingest_message("assistant", ai_text)
|
| 414 |
+
if resp:
|
| 415 |
+
metrics = client.extract_metrics(resp)
|
| 416 |
+
print_metrics(metrics)
|
| 417 |
+
tracker.add_turn(metrics)
|
| 418 |
+
print_ci(tracker.compute_index())
|
| 419 |
+
|
| 420 |
+
# =============================================================================
|
| 421 |
+
# CLI ENTRY POINT
|
| 422 |
+
# =============================================================================
|
| 423 |
+
|
| 424 |
+
if __name__ == "__main__":
|
| 425 |
+
parser = argparse.ArgumentParser(description="AICoevolution Semantic Telemetry (Production)")
|
| 426 |
+
parser.add_argument("--api-key", required=True, help="Your AICoevolution API Key")
|
| 427 |
+
parser.add_argument("--url", help=f"Custom SDK URL (default: {DEFAULT_SDK_URL})")
|
| 428 |
+
parser.add_argument("--hosted-ai", action="store_true", help="Use Hosted AI for responses")
|
| 429 |
+
parser.add_argument("--turns", type=int, default=10, help="Number of turns")
|
| 430 |
+
parser.add_argument("--no-color", action="store_true", help="Disable colored output")
|
| 431 |
+
|
| 432 |
+
args = parser.parse_args()
|
| 433 |
+
_init_console_colors(enable=not args.no_color)
|
| 434 |
+
|
| 435 |
+
client = SemanticTelemetryClient(args.api_key, base_url=(args.url or DEFAULT_SDK_URL))
|
| 436 |
+
run_session(client, args.turns, args.hosted_ai)
|
| 437 |
+
|
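The Hs/Vs weighting implemented by `CoevolutionTracker.compute_index` can be sanity-checked in isolation. A minimal sketch, using a simplified stand-in for the `TelemetryMetrics` dataclass defined earlier in `semantic_telemetry.py`; the turn values are illustrative, not real session data:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TelemetryMetrics:
    # Simplified stand-in for the SDK dataclass: only the fields
    # that the Horizontal Score computation actually reads.
    sgi: Optional[float] = None
    velocity: Optional[float] = None
    context_drift: float = 0.0

def horizontal_score(turns: List[TelemetryMetrics]) -> float:
    """Hs = 0.4*coherence occupancy + 0.4*mean SGI + 0.2*context stability."""
    sgi = [t.sgi for t in turns if t.sgi is not None]
    vel = [t.velocity for t in turns if t.velocity is not None]
    coherence = sum(1 for s, v in zip(sgi, vel) if s > 0.6 and v < 30.0) / len(turns)
    dyadic = sum(sgi) / len(sgi) if sgi else 0.0
    avg_drift = sum(t.context_drift for t in turns) / len(turns)
    stability = max(0.0, 1.0 - avg_drift / 100.0)
    return coherence * 0.4 + dyadic * 0.4 + stability * 0.2

turns = [TelemetryMetrics(sgi=0.8, velocity=10.0, context_drift=5.0),
         TelemetryMetrics(sgi=0.7, velocity=20.0, context_drift=10.0)]
hs = horizontal_score(turns)                      # 1.0*0.4 + 0.75*0.4 + 0.925*0.2 = 0.885
vs = 0.5 * 0.3 + 0.0 * 0.4 + 0.5 * 0.3           # production baseline Vs = 0.30
ci = hs * 0.6 + vs * 0.4                          # 0.651 -> "ELEVATED" tier
```

Both sample turns sit inside the coherence region (SGI > 0.6, velocity < 30°), so occupancy is 1.0; the modest Vs baseline caps the session at the ELEVATED tier.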
references.bib
ADDED
@@ -0,0 +1,237 @@
% References for Paper 03: Semantic Orbital Mechanics

% ============================================
% OUR PRIOR WORK
% ============================================

@article{jimenez2025s64,
  author  = {Jimenez Sanchez, Juan Jacobo},
  title   = {S64: A Symbolic Framework for Human-AI Meaning Negotiation},
  journal = {SSRN Working Paper},
  year    = {2025},
  month   = {November},
  doi     = {10.2139/ssrn.5895302},
  url     = {https://dx.doi.org/10.2139/ssrn.5895302},
  note    = {Paper 01. Working paper.}
}

@article{jimenez2025coherence,
  author  = {Jimenez Sanchez, Juan Jacobo},
  title   = {The Conversational Coherence Region: Geometry of Symbolic Meaning Across Embedding Models},
  journal = {Zenodo},
  year    = {2025},
  month   = {December},
  doi     = {10.5281/zenodo.18149380},
  url     = {https://doi.org/10.5281/zenodo.18149380},
  note    = {Paper 02.}
}

% ============================================
% SGI & GEOMETRIC HALLUCINATION DETECTION (MARÍN)
% ============================================

@article{marin2025geometric,
  author  = {Mar{\'\i}n, Javier},
  title   = {Geometric Hallucination Detection: Displacement Consistency and Semantic Grounding in RAG Systems},
  journal = {arXiv preprint arXiv:2512.13771},
  year    = {2025},
  month   = {December},
  note    = {Introduces SGI (Semantic Grounding Index) and Displacement Consistency for hallucination detection}
}

% ============================================
% DAVIS - SEMANTIC FIELD EQUATIONS
% ============================================

@article{davis2025field,
  author  = {Davis, Bee Rosa},
  title   = {Field Equations of Semantic Coherence: Curvature and Inference in Transformer Cognition},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17771796},
  url     = {https://doi.org/10.5281/zenodo.17771796},
  note    = {Working paper. Geometric field theory of semantic coherence.}
}

@article{davis2025spectral,
  author  = {Davis, Bee Rosa},
  title   = {Spectral Geometry of Transformer Cognition: Heat Kernel Analysis Reveals Functional Organization in Language Models},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17783723},
  url     = {https://doi.org/10.5281/zenodo.17783723},
  note    = {Preprint. Spectral/heat-kernel geometry for transformer interpretability.}
}

% ============================================
% BACH & SORENSEN - MACHINE CONSCIOUSNESS
% ============================================

@misc{bach2025machine,
  author       = {Bach, Joscha and Sorensen, Hikari},
  title        = {The Machine Consciousness Hypothesis},
  year         = {2025},
  howpublished = {Essay (California Institute for Machine Consciousness)},
  url          = {https://cimc.ai/cimcHypothesis.pdf},
  note         = {Essay. Develops a coherence-maximization account and computationalist-functional framing.}
}

% ============================================
% ALIGNMENT & RLHF
% ============================================

@article{christiano2017deep,
  author  = {Christiano, Paul F. and Leike, Jan and Brown, Tom and Martic, Miljan and Legg, Shane and Amodei, Dario},
  title   = {Deep Reinforcement Learning from Human Preferences},
  journal = {Advances in Neural Information Processing Systems},
  volume  = {30},
  year    = {2017}
}

@article{ouyang2022training,
  author  = {Ouyang, Long and Wu, Jeff and Jiang, Xu and Almeida, Diogo and Wainwright, Carroll and Mishkin, Pamela and Zhang, Chong and Agarwal, Sandhini and Slama, Katarina and Ray, Alex and others},
  title   = {Training language models to follow instructions with human feedback},
  journal = {Advances in Neural Information Processing Systems},
  volume  = {35},
  pages   = {27730--27744},
  year    = {2022}
}

@article{bai2022constitutional,
  author  = {Bai, Yuntao and Kadavath, Saurav and Kundu, Sandipan and Askell, Amanda and Kernion, Jackson and Jones, Andy and Chen, Anna and Goldie, Anna and Mirhoseini, Azalia and McKinnon, Cameron and others},
  title   = {Constitutional AI: Harmlessness from AI Feedback},
  journal = {arXiv preprint arXiv:2212.08073},
  year    = {2022}
}

@article{askell2021general,
  author  = {Askell, Amanda and Bai, Yuntao and Chen, Anna and Drain, Dawn and Ganguli, Deep and Henighan, Tom and Jones, Andy and Joseph, Nicholas and Mann, Ben and DasSarma, Nova and others},
  title   = {A General Language Assistant as a Laboratory for Alignment},
  journal = {arXiv preprint arXiv:2112.00861},
  year    = {2021}
}

% ============================================
% HALLUCINATION DETECTION
% ============================================

@article{ji2023survey,
  author  = {Ji, Ziwei and Lee, Nayeon and Frieske, Rita and Yu, Tiezheng and Su, Dan and Xu, Yan and Ishii, Etsuko and Bang, Ye Jin and Madotto, Andrea and Fung, Pascale},
  title   = {Survey of Hallucination in Natural Language Generation},
  journal = {ACM Computing Surveys},
  volume  = {55},
  number  = {12},
  pages   = {1--38},
  year    = {2023}
}

@article{huang2023survey,
  author  = {Huang, Lei and Yu, Weijiang and Ma, Weitao and Zhong, Weihong and Feng, Zhangyin and Wang, Haotian and Chen, Qianglong and Peng, Weihua and Feng, Xiaocheng and Qin, Bing and Liu, Ting},
  title   = {A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions},
  journal = {arXiv preprint arXiv:2311.05232},
  year    = {2023}
}

% ============================================
% EMBEDDING GEOMETRY
% ============================================

@inproceedings{ethayarajh2019contextual,
  author    = {Ethayarajh, Kawin},
  title     = {How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings},
  booktitle = {Proceedings of EMNLP-IJCNLP},
  year      = {2019},
  pages     = {55--65}
}

@inproceedings{mu2018allbuttop,
  author    = {Mu, Jiaqi and Bhat, Suma and Viswanath, Pramod},
  title     = {All-but-the-Top: Simple and Effective Postprocessing for Word Representations},
  booktitle = {Proceedings of ICLR},
  year      = {2018}
}

% ============================================
% DIALOGUE & CONVERSATION
% ============================================

@inproceedings{li2016deep,
  author    = {Li, Jiwei and Monroe, Will and Ritter, Alan and Jurafsky, Dan and Galley, Michel and Gao, Jianfeng},
  title     = {Deep Reinforcement Learning for Dialogue Generation},
  booktitle = {Proceedings of EMNLP},
  year      = {2016},
  pages     = {1192--1202}
}

@inproceedings{see2019makes,
  author    = {See, Abigail and Roller, Stephen and Kiela, Douwe and Weston, Jason},
  title     = {What Makes a Good Conversation? How Controllable Attributes Affect Human Judgments},
  booktitle = {Proceedings of NAACL-HLT},
  year      = {2019},
  pages     = {1702--1723}
}

% ============================================
% HOFSTADTER - STRANGE LOOPS
% ============================================

@book{hofstadter2007strange,
  author    = {Hofstadter, Douglas R.},
  title     = {I Am a Strange Loop},
  publisher = {Basic Books},
  year      = {2007},
  address   = {New York, NY},
  note      = {Develops the theory that consciousness arises from self-referential "strange loops" in cognitive systems}
}

@book{hofstadter1979godel,
  author    = {Hofstadter, Douglas R.},
  title     = {Gödel, Escher, Bach: An Eternal Golden Braid},
  publisher = {Basic Books},
  year      = {1979},
  address   = {New York, NY},
  note      = {Foundational work on self-reference, recursion, and the emergence of meaning from formal systems}
}

% ============================================
% FOUNDATIONAL WORKS
% ============================================

@book{Mead1934,
  author    = {Mead, George Herbert},
  title     = {Mind, Self, and Society},
  publisher = {University of Chicago Press},
  year      = {1934},
  address   = {Chicago, IL}
}

@book{Blumer1969,
  author    = {Blumer, Herbert},
  title     = {Symbolic Interactionism: Perspective and Method},
  publisher = {University of California Press},
  year      = {1969},
  address   = {Berkeley, CA}
}

@book{Varela1991,
  author    = {Varela, Francisco J. and Thompson, Evan and Rosch, Eleanor},
  title     = {The Embodied Mind: Cognitive Science and Human Experience},
  publisher = {MIT Press},
  year      = {1991},
  address   = {Cambridge, MA}
}

% ============================================
% INFORMATION THEORY
% ============================================

@article{shannon1948mathematical,
  author  = {Shannon, Claude E.},
  title   = {A Mathematical Theory of Communication},
  journal = {The Bell System Technical Journal},
  volume  = {27},
  number  = {3},
  pages   = {379--423},
  year    = {1948}
}
render.bat
ADDED
@@ -0,0 +1,103 @@
@echo off
setlocal EnableExtensions EnableDelayedExpansion
REM Render Paper 03: Semantic Orbital Mechanics
REM Usage: render.bat [format]
REM   format: pdf (default), html, typst, all

set FORMAT=%1
REM Default to "all" so PDF + HTML + deploy happens in one run
if "%FORMAT%"=="" set FORMAT=all

echo ============================================================
echo  PAPER 03: Semantic Orbital Mechanics
echo  Format: %FORMAT%
echo ============================================================
echo.

cd /d "%~dp0"

REM Clear Quarto cache to ensure fresh figures
echo [1/4] Clearing cache...
if exist ".quarto" rmdir /s /q ".quarto"
if exist "_freeze" rmdir /s /q "_freeze"
if exist "S64-orbital-paper_files" rmdir /s /q "S64-orbital-paper_files"
echo.

REM Handle "all" format - render both PDF and HTML
if /I "%FORMAT%"=="all" (
    echo [2/4] Rendering PDF...
    quarto render S64-orbital-paper.md --to pdf
    if !ERRORLEVEL! NEQ 0 goto :error

    echo.
    echo [3/4] Rendering HTML...
    quarto render S64-orbital-paper.md --to html
    if !ERRORLEVEL! NEQ 0 goto :error

    goto :deploy_html
)

echo [2/4] Rendering to %FORMAT%...
quarto render S64-orbital-paper.md --to %FORMAT%

if !ERRORLEVEL! NEQ 0 goto :error

echo.
echo [OK] Render complete! Output in _output/

if "%FORMAT%"=="pdf" (
    start "" "_output\S64-orbital-paper.pdf"
    goto :done
)

:deploy_html
REM If rendering HTML (or all), copy outputs into the website public folder
if /I "%FORMAT%"=="html" goto :do_deploy
if /I "%FORMAT%"=="all" goto :do_deploy
goto :done

:do_deploy
echo.
echo [4/4] Deploying to website...

REM Resolve absolute destination path
for %%I in ("%~dp0..\..\..\apps\website\public\_output") do set "DEST=%%~fI"
if not exist "!DEST!" mkdir "!DEST!"

REM Copy main HTML file
copy /Y "_output\S64-orbital-paper.html" "!DEST!\S64-orbital-paper.html" >nul
echo   - Copied S64-orbital-paper.html

REM Copy Quarto assets
if exist "_output\S64-orbital-paper_files" (
    if exist "!DEST!\S64-orbital-paper_files" rmdir /s /q "!DEST!\S64-orbital-paper_files"
    xcopy "_output\S64-orbital-paper_files" "!DEST!\S64-orbital-paper_files" /e /i /q >nul
    echo   - Copied S64-orbital-paper_files/
)

REM Copy figures with P03_ prefix (Paper 03 namespace)
if exist "_output\figures" (
    if not exist "!DEST!\figures" mkdir "!DEST!\figures"
    xcopy "_output\figures\P03_*.*" "!DEST!\figures\" /y /q >nul
    echo   - Copied figures/P03_*
)

echo.
echo [OK] Deployed to: !DEST!
echo      Website path: /_output/S64-orbital-paper.html
goto :done

:error
echo.
echo [ERROR] Render failed. Check errors above.
goto :end

:done
echo.
echo ============================================================
echo  COMPLETE
echo ============================================================

:end
pause
title-block.tex
ADDED
@@ -0,0 +1,43 @@
% Custom title block for Paper 03: Semantic Orbital Mechanics

\begingroup
\setlength{\parindent}{0pt}
\begin{center}
  {\LARGE\bfseries Semantic Orbital Mechanics:\\Measuring and Guiding AI Conversation Dynamics\par}
  \vspace{1.5em}
  {\large Juan Jacobo Jimenez Sanchez\par}
  \vspace{0.3em}
  {\normalsize Aicoevolution Ltd\par}
  {\normalsize Auckland, New Zealand\par}
  {\normalsize \texttt{research@aicoevolution.com}\par}
  \vspace{1em}
  {\normalsize January 2026\par}
\end{center}
\endgroup

\vspace{1.5em}

\noindent\textbf{Abstract}

\vspace{0.5em}

\noindent Current alignment techniques focus largely on response optimization---ensuring individual model outputs match human preferences. However, alignment is fundamentally a dynamical process: meaning emerges not from isolated tokens but from the trajectory of interaction over time. In this paper, we introduce a physics-inspired framework for measuring and guiding these dynamics, treating conversation as an orbital system in high-dimensional semantic space.

\vspace{0.5em}

\noindent Building on the geometric verification of the S64 symbolic framework (Paper 02), we adopt the Semantic Grounding Index (SGI) from Mar\'in's geometric hallucination detection work and reinterpret it as an orbital radius that measures the tension between local responsiveness (query gravity) and global context (history gravity). We define the Conversational Coherence Region as a stable orbit where exploration and grounding are balanced. We then introduce the Semantic Transducer---a telemetry system that decomposes embedding trajectories into actionable S64 signals: symbols, paths, and transformation phases.

\vspace{0.5em}

\noindent To validate this framework, we conduct a controlled steering experiment. By injecting fake telemetry metrics into an AI's system prompt, we test whether conversational orbits can be predictably altered. The results are surprising: conversations maintain orbital stability despite one participant's distorted perception. The orbit is robust; steering affects what is discussed but not how meaning moves.

\vspace{0.5em}

\noindent This finding reveals both the power and limits of orbital dynamics. The transducer provides reliable telemetry---trajectory metrics are invariant across 10 embedding backends. But orbital mechanics describes only the \emph{horizontal} plane of conversation: position and velocity. Detecting semantic manipulation requires the \emph{vertical} dimension---symbolic depth, transformation richness, contribution asymmetry---that reveals not just where meaning is but how deep it goes. This paper reframes the 3-Body Problem of human-AI interaction as a measurable dynamical system, while acknowledging that complete navigation requires the vertical instruments of Paper 04.

\vspace{1em}

\noindent\textbf{Keywords:} semantic orbital mechanics, conversation dynamics, orbital robustness, AI alignment, S64 framework, high-dimensional geometry, semantic transducer

\clearpage
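The abstract's claim that trajectory metrics are invariant across embedding backends rests on angular quantities. As a hedged sketch (not the paper's implementation), turn-to-turn velocity consistent with the tracker's 30° coherence threshold can be read as the angle between consecutive turn embeddings; the two vectors below are toy 2-D stand-ins for real high-dimensional embeddings:

```python
import math

def angle_deg(u, v):
    """Angle between two embedding vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Hypothetical consecutive turn embeddings, 20 degrees apart.
prev_turn = [1.0, 0.0]
next_turn = [math.cos(math.radians(20)), math.sin(math.radians(20))]
velocity = angle_deg(prev_turn, next_turn)  # ~20 deg: inside the coherence region
```

Because the angle depends only on relative orientation, a global rotation or rescaling of the embedding space (one way backends can differ) leaves this velocity unchanged.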