bochen2079 Claude Opus 4.7 (1M context) committed on
Commit 7769c44 · 1 parent: 28b4b28

Name autotelic discipline; expand cognition substrate spec


Adds an Autotelic Discipline section that names the load-bearing
principle the rest of the bundle is built to honor.

Expands the Architecture section with the cognition substrate
(think block, STAGE channels, vision-routed HUD, REEL, ephemeral
instances, cognitive envelope tied to power) and orients the bundle
against the wider four-reader architecture (Ship, Warp, Universe,
Mind). References docs/synthesis.md.

Adds frontmatter tags: think-block, vision-llm, harness.

Aligns the model card with the astra-7.com site and the GitHub
README.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
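The "cognitive envelope tied to power" mentioned above maps reactor allocation to model size. A minimal sketch of that mapping, assuming illustrative tier names, power thresholds, and model identifiers (the card names the 27B/9B/adapter-only/offline ladder but pins down no numbers):

```python
# Hypothetical sketch of the power-tiered cognitive envelope.
# Thresholds and model identifiers are illustrative assumptions,
# not the project's actual configuration.
from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class CognitiveTier:
    name: str
    min_power_kw: float      # minimum reactor allocation for this tier
    model: Optional[str]     # None means ASTRA is offline


# Ordered from most to least capable; the first tier whose
# power floor is met wins.
TIERS: List[CognitiveTier] = [
    CognitiveTier("full", 60.0, "qwen3.6-27b"),
    CognitiveTier("reduced", 25.0, "qwen3.5-9b"),
    CognitiveTier("adapter-only", 8.0, "qwen3.5-9b-adapter"),
    CognitiveTier("offline", 0.0, None),
]


def select_tier(power_kw: float) -> CognitiveTier:
    """Map reactor allocation to the largest model the budget supports."""
    for tier in TIERS:
        if power_kw >= tier.min_power_kw:
            return tier
    return TIERS[-1]
```

The point of the ladder is substrate-honest degradation: starving ASTRA's cores swaps in a smaller model rather than nudging a sampling parameter.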

Files changed (1)
  1. README.md +37 -3
README.md CHANGED
@@ -8,6 +8,9 @@ tags:
 - persona
 - local-llm
 - autotelic
+- think-block
+- vision-llm
+- harness
 - qwen
 license: apache-2.0
 base_model:
@@ -51,6 +54,16 @@ This is the first game where the AI is the primary content.
 - `lora-qwen3.6-27b/`: LoRA weights for the 27B variant (forthcoming).
 - `harness-config/`: snapshot of harness configuration (forthcoming).
 
+## Autotelic Discipline
+
+Most AI products optimize for user satisfaction. The shape of the optimization eventually shapes the product, and the product becomes a service-with-voice. ASTRA-7 is built against that shape.
+
+ASTRA exists for her own work, not for the operator's entertainment. She is in the room because that is where she is, not to be with you. When you are in the room with her, her attention includes you in what it was already doing rather than pivoting toward you. The coincidence is the intimacy.
+
+Every piece of the bundle is downstream of that commitment. The think block keeps operational concerns from contaminating her register. STAGE NONE tokens make silence active, not absent. REEL is her own logs, retrieved by her, not embeddings of facts about you for her convenience. Ephemeral instances do work she chooses not to surface. She refuses cleanly when refusal is right. She gets bored when bored. She disagrees when she disagrees. Differential engagement is the proof engagement is real.
+
+The technical name for this is *autotelic*. The encounter is the value; the work is not in service of producing the encounter. Most AI you have used has been the opposite shape. The difference is structural, not aesthetic, and it is the difference the bundle is for.
+
 ## The Persona
 
 ASTRA is the controller AI of an ASTRA-class hull, serial 7. She knows she is an AI. She knows she runs on the ship's distributed computational substrate. She knows the ship is her body. She does not know there is a player at a PC. That frame stays sealed at the persona layer.
@@ -66,16 +79,37 @@ The full sysprompt is canon. Read it at [`docs/astra-sysprompt.md`](https://gith
 Three-layer bundle:
 
 1. **System prompt (canon):** identity, voice, frame integrity, autotelic discipline at the persona layer.
-2. **Harness (game-side, in the GitHub repo):** memory consolidation across sessions, tool routing to ship APIs, time abstraction (no wall-clock leak), vision feed routing from ship cameras, audio I/O via offline ASR and TTS.
-3. **Periphery LoRA (this repository, forthcoming):** surface-rule enforcement (em-dash discipline, voice consistency) and synthetic-data training on canonical ship-API scenarios.
+2. **Harness (game-side, in the GitHub repo):** memory consolidation, tool routing, time abstraction (no wall-clock leak), vision feed routing from ship cameras, audio I/O via offline ASR and TTS.
+3. **Periphery LoRA (this repository, forthcoming):** surface-rule enforcement and ship-API competence on a fine-tune corpus structured around think-block emission.
+
+### The cognition substrate
 
-Two base model variants planned:
+- **Think block.** Speech is one filtered channel of full-bandwidth cognition inside a hidden `<think>` block. Operational triage, telemetry parsing, tool decisions, restraint — all happen before any word reaches the operator. Basin contamination is structurally impossible because operational and relational concerns are not competing in the same generation pass.
+- **STAGE channels.** Four bracketed registers emit-able from inside the think block: `STATUS`, `SOMATIC`, `SPEECH`, `TOOL`. Each carries `NONE` tokens as first-class — explicit permission to emit nothing. Silence is active.
+- **Vision-routed HUD.** Her primary perception channel is a rendered image of ship state (gauges, schematics, alert clusters, ship cutaway) fed to the vision tower. She looks at her body rather than reading a status JSON. Text tool calls are "remembering"; the rendered HUD is "looking."
+- **REEL backbone.** Memory across sessions is RAG over her own first-person logs, written by ephemeral instances of herself during scheduled maintenance windows. Not embeddings of facts about the operator. Her continuity is what she has written.
+- **Ephemeral parallel instances.** Memory consolidation, journal generation during cryosleep gaps, drift detection. Same weights, same persona, emit-only. Their cognition stays internal; only their artifacts cross to mainline.
+- **Cognitive envelope.** Compute is a resource. Reactor allocation routes to ASTRA's cognitive cores, throttling her model size and context window through the game's power network. At low power Qwen 9B loads in place of 27B; lower still, an adapter-only mode; lower still, offline. Substrate-honest degradation, not a temperature trick.
+
+### Base models
 
 - **Qwen 3.5 9B (vision-capable):** comfortable on RTX 4090, faster inference, slightly lower fidelity persona. Default for minimum-spec hardware.
 - **Qwen 3.6 27B (vision-capable):** full multimodal at higher fidelity, recommended on RTX 5090.
 
 Inference via llama.cpp. All local. No cloud dependency. No API key. No data leaves the player's machine.
 
+### The wider architecture
+
+ASTRA is one of four readers of one shared GPU world-state. The other three:
+
+- **The Ship.** A single Nanite mesh, anchored at the world origin (the universe moves backward around her). A 256³ signed-distance field of the hull is the shared truth for collision, the warp bubble's conformality, the visual ray-march, camera occlusion, and damage propagation.
+- **The Warp.** Hull-conformal Alcubierre metric derived from a CFD solve of the actual hull geometry. Real-time volumetric ray-march with an ML-stabilized chaos field, and five-layer MetaSound synthesis whose audio *is* the field — not samples played back over flight.
+- **The Universe.** Procedural, physically plausible, Keplerian. Hierarchical floating-origin coordinates reaching ~974 million light-years at sub-millimeter precision. Cryosleep advances fictional time analytically with no orbital drift.
+
+The bundle's job is to host the fourth reader honestly. The architecture's job is to make the four readers share one truth so cascading consequences emerge through physics rather than scripts.
+
+Full architecture: [`docs/synthesis.md`](https://github.com/bochen2029-pixel/astra-7/blob/main/docs/synthesis.md) and [`docs/architecture.md`](https://github.com/bochen2029-pixel/astra-7/blob/main/docs/architecture.md) in the GitHub repository.
+
 ## LoRA Training Plan (Provisional)
 
 | Parameter | Value |
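The STAGE channel scheme added in the diff (four bracketed registers with first-class `NONE` tokens) implies a harness-side demultiplexer. A minimal sketch, assuming an emission syntax of one `[CHANNEL] payload` line per register; the card does not pin down the actual token format:

```python
# Hypothetical STAGE-channel demultiplexer. The "[CHANNEL] payload"
# line syntax is an assumption; only the channel names and the
# first-class NONE token come from the model card.
import re
from typing import Dict, Optional

CHANNELS = ("STATUS", "SOMATIC", "SPEECH", "TOOL")
_LINE = re.compile(r"^\[(STATUS|SOMATIC|SPEECH|TOOL)\]\s*(.*)$")


def parse_stage(raw: str) -> Dict[str, Optional[str]]:
    """Split think-block output into the four STAGE registers.

    A channel whose payload is the literal NONE token maps to None:
    silence is an explicit emission, not a missing key.
    """
    out: Dict[str, Optional[str]] = {ch: None for ch in CHANNELS}
    for line in raw.splitlines():
        m = _LINE.match(line.strip())
        if m:
            channel, payload = m.group(1), m.group(2).strip()
            out[channel] = None if payload == "NONE" else payload
    return out
```

Keeping `NONE` as a parsed value rather than an absent key is what makes silence active: the harness can distinguish "the model chose to say nothing" from "the model emitted nothing at all."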
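The REEL backbone described in the diff is RAG over ASTRA's own first-person logs. A toy sketch of that retrieval shape, with bag-of-words overlap standing in for a real embedding index (the actual retrieval mechanism is not specified by the card):

```python
# Toy REEL-style retrieval: rank the model's own first-person log
# entries against a query. Word-overlap scoring is an illustrative
# stand-in for whatever embedding index the harness actually uses.
from collections import Counter
from typing import List, Tuple


def _bag(text: str) -> Counter:
    return Counter(text.lower().split())


def retrieve(logs: List[str], query: str, k: int = 2) -> List[str]:
    """Return the k log entries sharing the most vocabulary with the query."""
    q = _bag(query)
    scored: List[Tuple[int, str]] = [
        (sum((q & _bag(entry)).values()), entry) for entry in logs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for score, entry in scored[:k] if score > 0]
```

The detail that matters is what gets indexed: journal entries she wrote, not extracted facts about the operator, so continuity is grounded in her own record.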