Zandy-Wandy committed on
Commit
8ddd3a9
Β·
verified Β·
1 Parent(s): 1e06fff

Update README.md

Files changed (1)
  1. README.md +105 -159
README.md CHANGED
@@ -7,239 +7,185 @@ sdk: static
7
  pinned: true
8
  ---
9
 
10
- # Matrix.Corp 🚀
11

12
- Welcome to **Matrix.Corp** – an independent AI research organization pushing the boundaries of what intelligence can be. We build specialized models, novel architectures, and entirely new paradigms for real-world use cases.
13

14
- ---
15
-
16
- ## Our Philosophy
17
-
18
- We believe intelligence should be **purpose-built, accessible, and honest**. Every project in our ecosystem is designed from the ground up for a specific domain, user, and hardware target. We focus on novel architectures, hardware-aware optimization, emotional intelligence, and, with Vexa, a completely new computational paradigm that goes beyond AI entirely.
19
 
20
  ---
21
 
22
- ## Projects & Model Families
23
 
24
- ### 🔷 Vexa – Crystalline Intelligence Substrate
25
- *NOT AN AI MODEL – a new computational paradigm*
26
- *Runs on any laptop · Fully open source · Lume language*
27
- **Status: 🔴 Spec complete – build in progress**
28
 
29
- > **Vexa is not a neural network. It is a Crystalline Intelligence Substrate** – a living, self-updating lattice of Glyphs that crystallizes knowledge in 10 minutes on any laptop, learns from the live web in real time, and runs on Ollama. No gradient descent. No GPU cluster. No frozen knowledge.
30
 
31
- **Core components:**
32
- - 🔷 **Glyph Lattice** – structured meaning objects, not weights. Every Glyph has identity, typed relations, confidence, source references, and a decay function
33
- - ⚡ **Crystallization** – replaces training. 5-phase pipeline, 10 min, any 8-core CPU, no GPU needed
34
- - 🌐 **Real-Time Learning** – 3 live threads: Web Crystallizer, Interaction Crystallizer, Decay Monitor. Never goes stale
35
- - 💻 **Lume** – declarative-relational programming language where meaning is a first-class citizen
36
- - 🔌 **Vexa Bridge** – Ollama / vLLM / HuggingFace compatible adapter
37
 
38
- | Component | Repo | Status |
39
- |---|---|---|
40
- | Vexa-V1 (Max density, ~10B Glyphs) | Matrix-Corp/Vexa-V1 | 🔴 Planned |
41
- | Vexa-Micro-V1 (Laptop, ~10M Glyphs, 4GB RAM) | Matrix-Corp/Vexa-Micro-V1 | 🔴 Planned |
42
- | Lume Language Spec + Parser | Matrix-Corp/Lume-Language-Spec | 🔴 Planned |
43
- | Vexa Bridge (Ollama/vLLM/HF adapter) | Matrix-Corp/Vexa-Bridge | 🔴 Planned |
44
- | Vexa Crystallizer Engine | Matrix-Corp/Vexa-Crystallizer | 🔴 Planned |
45
 
46
- **Glyph Lattice density tiers:**
47
 
48
- | Tier | Glyphs | RAM | Equivalent | Use Case |
49
- |---|---|---|---|---|
50
- | Nano | ~1M | 2GB | ~1B LLM | IoT, edge devices |
51
- | Micro | ~10M | 4GB | ~7B LLM | Laptops, Raspberry Pi |
52
- | Core | ~100M | 8GB | ~13B LLM | Consumer GPU |
53
- | Dense | ~1B | 16GB | ~30B LLM | Workstation |
54
- | Max | ~10B | 40GB | ~70B LLM | A100 / p300a |
55
 
56
- **Collection:** Matrix-Corp/vexa-v1 *(coming soon)*
57
 
58
  ---
59
 
60
- ### ⚡ Kairiq – Critical Moment Intelligence Module
61
- *Universal plug-in module · Built entirely in Lume · Slap-on adapters*
62
- **Status: 🖤 SCRAPPED AND DELETED**
63
-
64
- > **Kairiq is not a model. It is a universal intelligence amplifier** – a plug-in module built entirely in `.lume` files that attaches to any Matrix.Corp model and elevates it to elite benchmark performance. A Kairiq-enhanced 32B competes with a vanilla 70B. Not because it knows more, but because it deploys what it knows at exactly the right moment, at the right depth, and with the right priority.
65
-
66
- **The Three KQ Dimensions:**
67
- - ⏱ **Temporal Acuity (T)** – sensing rhythm, pacing, momentum. Knowing when a reasoning path is collapsing before it wastes compute
68
- - ⚡ **Tension Sensing (X)** – reading pressure gradients. Identifying load-bearing sub-problems where failure cascades everywhere
69
- - 👑 **Imperial Hierarchy (H)** – the rank and weight of every sub-problem. What overrides what. The actual question beneath the stated question
70
-
71
- **Three-layer pipeline:** Kairiq Gate → Kairiq Router → Base Model → Kairiq Verifier
72
-
73
- **Benchmark targets:**
74
-
75
- | Benchmark | KQ Dimension | Expected Gain |
76
- |---|---|---|
77
- | MMLU | H – ranks actual question instantly | +4–6% |
78
- | HumanEval | T – kills dead reasoning paths early | +6–9% |
79
- | MATH | X – slow-deep on load-bearing steps | +5–8% |
80
- | ARC | H – no reasoning overkill on simple problems | +3–5% |
81
- | HellaSwag | T – reads narrative momentum correctly | +4–7% |
82
 
83
- **Written entirely in `.lume`. Python `adapter.py` bridge for any existing model. Zero changes to base model required.**
84
 
85
- | Component | Repo | Status |
86
  |---|---|---|
87
- | Kairiq Module V1 | Matrix-Corp/Kairiq-Module-V1 | 🔴 Planned |
88
- | Lume Kairiq Spec | Matrix-Corp/Lume-Kairiq-Spec | 🔴 Planned |
89
- | Zenith-32B-Kairiq-V1 | Matrix-Corp/Zenith-32B-Kairiq-V1 | 🔴 Planned |
90
 
91
- **Collection:** Matrix-Corp/kairiq-v1 *(coming soon)*
92
 
93
  ---
94
 
95
- ### 🌌 Zenith – Reasoning & Emotional Intelligence
96
- *Optimized for Tenstorrent Blackhole p300a hardware*
97
 
98
- High-performance reasoning models built for the Tenstorrent p300a accelerator (dual-chip, 32 RISC-V cores, 64GB GDDR6). Ring Attention (32K context), Mixture of Experts, and EQ Engine V1 for emotional intelligence.
99
 
100
- | Model | Parameters | Base Model | Status |
101
  |---|---|---|---|
102
- | [Zenith-7B-V1](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) | 7B | Qwen2.5-Coder-7B | 🟡 Preview |
103
- | [Zenith-28B-V1](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) | 28B | Qwen3.5-27B (Claude Opus 4.6 distilled) | 🟡 Preview |
104
- | [Zenith-32B-V1](https://huggingface.co/Matrix-Corp/Zenith-32b-p300-V1) | 32B | DeepSeek-R1-Distill-Qwen-32B | 🟡 Preview |
105
- | [Zenith-70B-V1](https://huggingface.co/Matrix-Corp/Zenith-70b-p300-V1) | 70B | DeepSeek-R1-Distill-Llama-70B | 🟡 Preview |
106
 
107
- **Key features:** EQ Engine V1 · Ring Attention 32K · MoE 12 experts top-2 · TP=8/PP=4 · Ollama + vLLM compatible
108
-
109
- **Collection:** [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
110
 
111
  ---
112
 
113
- ### 🔬 Vortex Scientific – Deep Science Reasoning
114
- *Optimized for Apple Silicon & Nvidia 4060 · Built from scratch – no base model*
115
-
116
- From-scratch models for scientific reasoning across Physics, Mathematics, Chemistry, Biology, Earth Science, Space Science, and Zoology. Hybrid SSM + attention architecture with a custom 50K science tokenizer and 4 specialized science modules.
117
 
118
- | Model | Parameters | Architecture | Status |
119
- |---|---|---|---|
120
- | [Vortex-7B-V1](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) | 7B | Hybrid SSM + Attention (60% SSM) | 🟡 Preview |
121
- | [Vortex-13B-V1](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) | 13B | Hybrid SSM + Attention (50% SSM) | 🟡 Preview |
122
 
123
- **Key features:** No base model · Custom science tokenizer · Equation/LaTeX · Molecular/Periodic Table modules · Laptop-first
124
 
125
- **Collection:** [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
126
 
127
  ---
128
 
129
- ### 🌿 Touch Grass – Music AI Assistant
130
- *Ultra lightweight · Runs on anything*
131
 
132
- Fine-tuned music assistant for learning instruments, music theory, songwriting, ear training, and music history. Warm, encouraging, beginner-friendly.
133
 
134
- | Model | Parameters | Base Model | Status |
135
- |---|---|---|---|
136
- | [TouchGrass-3B](https://huggingface.co/Matrix-Corp/TouchGrass-3b) | 3B | Qwen3.5-3B-Instruct | 🟡 Preview |
137
- | [TouchGrass-7B](https://huggingface.co/Matrix-Corp/TouchGrass-7b) | 7B | Qwen3.5-7B-Instruct | 🟡 Preview |
138
 
139
- **Key features:** All instruments · Tab & chord generation · Music theory engine · Ear training · Songwriting assistant · Music EQ adapter
140
 
141
- **Collection:** [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass)
 
 
 
142
 
143
- ---
144
 
145
- ### 🌐 Matrix Lattice – Frontier Agentic MoE
146
- *Inference provider deployment · Closed source · Long-term roadmap*
147
- **Status: 🩵 Long-Term Roadmap · Spec complete**
148
 
149
- Flagship frontier agentic + multimodal MoE family with 17 custom modules including EQ Engine V2, Multi-Agent Coordination Layer (MACL), Hierarchical Context Compression Engine (HCCE), and Safety Reasoning Module. Designed for inference provider deployment (Novita, Hyperbolic, Together, Fireworks).
150
 
151
- | Model | Total Params | Active Params | Context | Status |
152
- |---|---|---|---|---|
153
- | Lattice-120B | 120B | ~22B | 1M tokens | 🩵 Long-Term Roadmap |
154
- | Lattice-430B | 430B | ~38B | 1M tokens | 🩵 Long-Term Roadmap |
155
- | Lattice-671B | 671B | ~47B | 1M tokens | 🩵 Long-Term Roadmap |
156
 
157
- **Status:** 🩵 Long-Term Roadmap · 🟣 Closed Source
158
 
159
  ---
160
 
161
  ### 🎨 Matrix Voxel – 3D Generation
162
- *Flow Matching · Triplane Latent · A100 40GB*
163
- **Status: 🔴 Planned – Architecture complete**
164
 
165
- 3D generation model family sharing a flow-matching DiT backbone (~2.3B) with task-specific decoder heads.
166
 
167
- | Model | Task | Outputs | Status |
168
  |---|---|---|---|
169
- | Voxel Atlas | World/environment generation | .vox, .obj, .usd | 🔴 Planned · 🟢 Open |
170
- | Voxel Forge | 3D mesh & asset generation | .obj, .glb, .fbx, .usdz | 🔴 Planned · 🟢 Open |
171
- | Voxel Cast | 3D printable generation | .stl, .step, .3mf | 🔴 Planned · 🟢 Open |
172
- | Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS), NeRF weights | 🔴 Planned · 🟢 Open |
173
- | Voxel Prime | All-in-one unified | All formats | 🔴 Planned · 🟣 Closed |
174
 
175
  ---
176
 
177
- ### ⚒️ Matrix Axiom – Extreme Reasoning Code Intelligence
178
- *Reserved · Design in progress*
179
- **Status: 🔴 Reserved – Design starting**
180

181
- > Axiom is Matrix.Corp's planned extreme reasoning coding model. Hybrid architecture on a proven base with structurally enforced pre-code reasoning, a self-verify + rewrite loop, internal multi-agent trident (Architect · Artisan · Auditor), and a custom semantic code tokenizer. Targets #1 on HumanEval. Any hardware. Kairiq-enhanced at launch.
182
 
183
- *Full spec coming soon.*
184
 
185
- ---
186
-
187
- ## Status Legend
188
-
189
- | Status | Meaning |
190
- |---|---|
191
- | 🟢 Released | Trained weights available, benchmarks published |
192
- | 🟡 Preview | Architecture published, training in progress or planned |
193
- | 🔴 Planned | Design complete, build not yet started |
194
- | 🩵 Long-Term Roadmap | Vision defined, significant research and compute required |
195
- | 🟣 Closed Source | Weights and training are proprietary |
196
 
197
  ---
198
 
199
- ## Hardware Philosophy
200
-
201
- We build for **accessible, affordable hardware** – not just cloud GPUs:
202
 
203
- | Series | Target Hardware |
204
- |---|---|
205
- | Vexa | Any 8-core laptop – CPU only |
206
- | Kairiq | Any – universal plug-in |
207
- | Zenith | Tenstorrent Blackhole p300a |
208
- | Vortex | MacBook M2/M3 + Nvidia 4060 laptop |
209
- | Touch Grass | Any hardware |
210
- | Axiom | Any / multi-target |
211
- | Matrix Lattice | 16–32× H100 / p300a |
212
- | Matrix Voxel | A100 40GB |
213
 
214
  ---
215
 
216
- ## Reserved Project Names
217
 
218
- - **Vexa** → Crystalline Intelligence Substrate
219
- - **Kairiq** → Critical Moment Intelligence Module
220
- - **Axiom** → Extreme Reasoning Code Intelligence
 
 
221
 
222
  ---
223
 
224
- ## Organization
225
 
226
- **Matrix.Corp** is an independent AI research organization.
227
 
228
- - 👤 Founded by [Zandy-Wandy](https://huggingface.co/Zandy-Wandy)
229
- - 🐙 GitHub: [zapgaming](https://github.com/zapgaming)
230
- - 🤗 HuggingFace: [Matrix-Corp](https://huggingface.co/Matrix-Corp)
 
 
 
231
 
232
  ---
233
 
234
- ## Collections
235
 
236
- - [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
237
- - [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
238
- - [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass)
239
- - Matrix-Corp/vexa-v1 *(coming soon)*
240
- - Matrix-Corp/kairiq-v1 *(coming soon)*
241
- - Matrix-Corp/voxel-v1 *(coming soon)*
 
 
 
 
242
 
243
  ---
244
 
245
- *Building the future of intelligence – one paradigm at a time.* 🚀
 
9
 
10
+ # Matrix.Corp
11
 
12
+ Independent AI research organization building specialized models, frontier agentic systems, and new intelligence paradigms.
13
 
14
+ **HuggingFace:** [Matrix-Corp](https://huggingface.co/Matrix-Corp) · **Founded by:** [Zandy-Wandy](https://huggingface.co/Zandy-Wandy) · **GitHub:** [zapgaming](https://github.com/zapgaming)
15
 
16
  ---
17
 
18
+ ## Status Legend
19
 
20
+ | Badge | Meaning |
21
+ |---|---|
22
+ | 🟢 Released | Weights available, ready to use |
23
+ | 🟡 Preview | Architecture published, training planned |
24
+ | 🔴 Planned | Design complete, not yet built |
25
+ | 🩵 Long-Term | Vision defined, major research ahead |
26
+ | 🟣 Closed | Proprietary weights |
27
+ | ⬛ Deprecated | Cancelled or superseded |
28
 
29
+ ---
30
 
31
+ ## Models
32
 
33
+ ### 🌌 Zenith – Reasoning + Emotional Intelligence
34
+ **Status:** 🟡 Preview · **Target:** Tenstorrent Blackhole p300a
35

36
+ Transformer models with a built-in EQ Engine – a dedicated emotional intelligence layer that sits alongside the reasoning stack. Ring Attention (32K), MoE (12 experts top-2), Ollama + vLLM compatible.
37
 
38
+ | Model | Params | Base | Link |
39
+ |---|---|---|---|
40
+ | Zenith-7B-V1 | 7B | Qwen2.5-Coder-7B | [→](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) |
41
+ | Zenith-28B-V1 | 28B | Qwen3.5-27B (Opus 4.6 distilled) | [→](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) |
42
+ | Zenith-32B-V1 | 32B | DeepSeek-R1-Distill-Qwen-32B | [→](https://huggingface.co/Matrix-Corp/Zenith-32b-V1-Tenstorrent-Blackhole-p300) |
43
+ | Zenith-70B-V1 | 70B | DeepSeek-R1-Distill-Llama-70B | [→](https://huggingface.co/Matrix-Corp/Zenith-70b-V1-Tenstorrent-Blackhole-p300) |
 
44
 
45
+ [View Zenith Collection →](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
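The top-2 expert routing mentioned above can be sketched in a few lines. This is an illustrative toy, not Zenith's implementation: the gating network, expert shapes, and scalar "experts" here are stand-ins.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top2_route(gate_logits):
    """Pick the two highest-scoring experts and renormalize their gate weights."""
    probs = softmax(gate_logits)
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    norm = probs[top2[0]] + probs[top2[1]]
    return [(i, probs[i] / norm) for i in top2]

def moe_layer(token, gate_logits, experts):
    """Only the two routed experts run; the output is their gate-weighted sum."""
    return sum(w * experts[i](token) for i, w in top2_route(gate_logits))

# 12 toy "experts": expert k just scales its input by k + 1.
experts = [lambda x, k=k: (k + 1) * x for k in range(12)]
out = moe_layer(1.0, [0.0] * 11 + [5.0], experts)  # gate strongly favors expert 11
```

The point of top-2 routing is that only ~2/12 of the expert parameters are active per token, which is what keeps a large MoE cheap at inference.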
46
 
47
  ---
48
 
49
+ ### 🔬 Vortex Scientific – Deep Science Reasoning
50
+ **Status:** 🟡 Preview · **Target:** MacBook M2/M3 + Nvidia 4060
51
 
52
+ Built from scratch – no base model. Custom 50K science tokenizer. Hybrid SSM+Attention architecture with four domain-specific modules: Equation/LaTeX, Numerical, Citation, and Molecular/Periodic Table.
53
 
54
+ | Model | Params | Link |
55
  |---|---|---|
56
+ | Vortex-7B-V1 | 7B | [→](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) |
57
+ | Vortex-13B-V1 | 13B | [→](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) |
 
58
 
59
+ [View Vortex Collection →](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
60
 
61
  ---
62
 
63
+ ### 🌿 Touch Grass – Music AI
64
+ **Status:** 🟡 Preview · **Target:** Any hardware
65
 
66
+ LoRA fine-tune on Qwen3.5 built for musicians. Tab & Chord Module, Music Theory Engine, Ear Training, EQ Adapter (4 emotional modes), Songwriting Module.
67
 
68
+ | Model | Params | Base | Link |
69
  |---|---|---|---|
70
+ | TouchGrass-3B | 3B | Qwen3.5-3B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-3b) |
71
+ | TouchGrass-7B | 7B | Qwen3.5-7B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-7b) |
 
 
72
 
73
+ [View Touch Grass Collection →](https://huggingface.co/collections/Matrix-Corp/touch-grass)
 
 
74
 
75
  ---
76
 
77
+ ### 🌐 Matrix Lattice – Frontier Agentic MoE
78
+ **Status:** 🟢 Released · 🟣 Closed Source · **Target:** 4–32× H100 / Tenstorrent p300a
 
 
79
 
80
+ **Shipped.** Our largest and most capable system. Frontier-scale mixture-of-experts with 17 custom intelligence modules including: EQ Engine V2, Multi-Agent Coordination Layer (MACL), Hierarchical Context Compression Engine (HCCE), Causal Reasoning Graph, Long-Horizon Task Planner, Confidence Calibration Head, Safety Reasoning Module (SRM), and more. 1M token context across all tiers.
 
 
 
81
 
82
+ | Model | Total Params | Active Params | Experts | Context | Link |
83
+ |---|---|---|---|---|---|
84
+ | Lattice-120B | 120B | ~22B | 64 top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-120B-V1) |
85
+ | Lattice-430B | 430B | ~38B | 128 top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-430B-V1) |
86
+ | Lattice-671B | 671B | ~47B | 256 top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-671B-V1) |
87
 
88
+ [View Lattice Collection →](https://huggingface.co/collections/Matrix-Corp/lattice-v1)
89
 
90
  ---
91
 
92
+ ### 🩸 Matrix ECHO – Living Error Memory
93
+ **Status:** 🔴 Build In Progress · 🟢 Open Source · **Language:** Rust
94
 
95
+ **The model that remembers how it was wrong.**
96
 
97
+ ECHO is a 27B coding-focused LLM built on `Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled`, running fully in Rust via HuggingFace `candle`. Every correction it receives crystallizes into a **Scar** – a typed, weighted memory object stored in a live petgraph lattice.
 
 
 
98
 
99
+ Before every response, ECHO scans its Scar lattice for similar past mistakes. The more it's corrected, the harder it is to fool. Mistakes are not erased – they become assets.
100
 
101
+ **Core loop:**
102
+ ```
103
+ prompt → pre-scan Scar lattice → inject caution context → generate → correction → new Scar forms
104
+ ```
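The correction-to-Scar half of that loop can be sketched as follows. The real implementation is Rust + petgraph; the `ScarLattice` class, topic keys, and the "+1 weight per repeat" rule here are illustrative assumptions, not ECHO's published design.

```python
from dataclasses import dataclass, field

SCAR_TYPES = {"factual", "logical", "contextual", "hallucination", "overconfidence"}

@dataclass
class Scar:
    """A crystallized correction: what went wrong, in which topic, how often."""
    topic: str
    scar_type: str
    weight: float          # grows each time the same mistake recurs
    note: str = ""

@dataclass
class ScarLattice:
    scars: list = field(default_factory=list)

    def record_correction(self, topic, scar_type, note=""):
        assert scar_type in SCAR_TYPES
        for s in self.scars:
            if s.topic == topic and s.scar_type == scar_type:
                s.weight += 1.0    # repeated mistake -> heavier scar
                return s
        s = Scar(topic, scar_type, weight=1.0, note=note)
        self.scars.append(s)
        return s

    def pre_scan(self, prompt_topic):
        """Before generating, collect past mistakes relevant to this prompt,
        heaviest scars first, to build the injected caution context."""
        return sorted((s for s in self.scars if s.topic == prompt_topic),
                      key=lambda s: s.weight, reverse=True)

lattice = ScarLattice()
lattice.record_correction("rust-lifetimes", "logical")
lattice.record_correction("rust-lifetimes", "logical")  # same mistake again
hits = lattice.pre_scan("rust-lifetimes")
```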
105
 
106
+ **Scar types:** Factual · Logical · Contextual · Hallucination · Overconfidence
107
 
108
+ **Domain Weakness Map** – ECHO tracks which topics it's systematically weak in and suppresses confidence automatically in high-risk domains.
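A minimal sketch of that suppression idea, assuming a per-domain correction counter and a hypothetical halving rule (the actual ECHO policy is not published):

```python
def domain_confidence(base_confidence, error_count, threshold=3):
    """Hypothetical rule: beyond `threshold` recorded corrections in a domain,
    each further error halves the stated confidence."""
    if error_count <= threshold:
        return base_confidence
    return base_confidence * 0.5 ** (error_count - threshold)

# Domain Weakness Map as a simple counter: domain -> corrections received.
weakness_map = {"async-rust": 5, "music-theory": 1}
conf_weak = domain_confidence(0.9, weakness_map["async-rust"])   # suppressed
conf_ok = domain_confidence(0.9, weakness_map["music-theory"])   # untouched
```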
 
 
109
 
110
+ **OpenAI-compatible API** – drop-in via `POST /v1/chat/completions`. Corrections via `POST /v1/echo/correct`.
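Because the API is OpenAI-compatible, any standard chat-completions client should work. A stdlib-only sketch: the base URL is an assumed local deployment, and the `/v1/echo/correct` body shape is a guess from the endpoint name, not a documented schema.

```python
import json
from urllib import request

BASE = "http://localhost:8080"  # assumption: a locally running ECHO server

def chat_payload(prompt, model="ECHO-27B-V1"):
    """Standard OpenAI chat-completions request body; ECHO accepts the same schema."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def post(path, body):
    """POST a JSON body and return the HTTP response."""
    req = request.Request(BASE + path,
                         data=json.dumps(body).encode(),
                         headers={"Content-Type": "application/json"})
    return request.urlopen(req)

# Usage (requires a running server, so not executed here):
# post("/v1/chat/completions", chat_payload("Explain borrow checking"))
# post("/v1/echo/correct", {"correction": "That lifetime example was wrong"})
```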
111
 
112
+ | Model | Params | Base | Language |
113
+ |---|---|---|---|
114
+ | ECHO-27B-V1 | 27B | Qwen3.5-27B (Opus 4.6 distilled) | Rust + candle |
 
 
115
 
116
+ [View ECHO Collection →](https://huggingface.co/collections/Matrix-Corp/echo-v1)
117
 
118
  ---
119
 
120
  ### 🎨 Matrix Voxel – 3D Generation
121
+ **Status:** 🔴 Planned · **Target:** A100 40GB
 
122
 
123
+ Flow-matching DiT backbone (~2.3B) with task-specific decoder heads. Generates 3D meshes, environments, printable models, and NeRF/Gaussian Splatting outputs.
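Flow matching trains the backbone to predict a velocity field along a path from noise to data. A toy 1-D version of the standard linear-path objective, purely illustrative and unrelated to the actual Voxel training code:

```python
def flow_matching_loss(model, x0, x1, t):
    """Linear-path flow matching: interpolate x_t = (1-t)*x0 + t*x1 and
    regress the model's velocity at (x_t, t) onto the target x1 - x0."""
    xt = (1 - t) * x0 + t * x1
    target_velocity = x1 - x0        # constant along a straight path
    pred = model(xt, t)
    return (pred - target_velocity) ** 2

# Toy 1-D "model" that already predicts the correct constant velocity.
perfect = lambda xt, t: 2.0
loss = flow_matching_loss(perfect, x0=1.0, x1=3.0, t=0.25)
```

At sampling time the learned velocity field is integrated from noise (t=0) to data (t=1), which is what lets one backbone serve several decoder heads.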
124
 
125
+ | Model | Task | Outputs | License |
126
  |---|---|---|---|
127
+ | Voxel Atlas | World/environment gen | .vox, .obj, .usd | 🟢 Open |
128
+ | Voxel Forge | 3D mesh & assets | .obj, .glb, .fbx, .usdz | 🟢 Open |
129
+ | Voxel Cast | 3D printable | .stl, .step, .3mf | 🟢 Open |
130
+ | Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS) | 🟢 Open |
131
+ | Voxel Prime | Unified all-in-one | All formats | 🟣 Closed |
132
 
133
  ---
134
 
135
+ ### 🔷 Matrix Vexa – Crystalline Intelligence Substrate
136
+ **Status:** 🔴 Paused · 🟢 Open Source
 
137
 
138
+ Vexa is not a model. It is a new intelligence paradigm – a living lattice of **Glyphs** (structured meaning objects) that grows through **Crystallization** instead of training. 10 minutes on any CPU. No GPU required. Knowledge never goes stale: three background threads continuously crystallize from the live web, learn from interactions, and retire decayed Glyphs.
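A Glyph as described above can be sketched as a small record with confidence decay. The field names, half-life rule, and example values are assumptions for illustration, not the Vexa spec:

```python
from dataclasses import dataclass, field

@dataclass
class Glyph:
    """A structured meaning object: identity, typed relations, confidence,
    source references, and a half-life that erodes confidence until refreshed."""
    name: str
    relations: dict = field(default_factory=dict)   # relation type -> glyph names
    confidence: float = 1.0
    sources: list = field(default_factory=list)
    half_life_days: float = 30.0

    def decayed_confidence(self, days_since_refresh):
        """Exponential decay; a Decay Monitor would re-crystallize stale Glyphs."""
        return self.confidence * 0.5 ** (days_since_refresh / self.half_life_days)

g = Glyph("rust", relations={"is_a": ["language"]}, sources=["rust-lang.org"])
stale = g.decayed_confidence(60)   # two half-lives
```

The decay function is what makes the lattice "living": confidence is a function of freshness, not a frozen weight.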
139
 
140
+ The full paradigm definition and build prompt are complete. The build is paused and will resume.
141
 
142
+ [View Vexa Collection →](https://huggingface.co/collections/Matrix-Corp/vexa-v1)
143
 
144
  ---
145
 
146
+ ### ⬛ ~~Kairiq – Critical Moment Intelligence Module~~
147
+ **Status:** ⬛ Deprecated
 
148
 
149
+ A Lume-native intelligence amplifier module designed to wrap Matrix models. Deprecated: the custom Lume language runtime exceeded practical build complexity. Its core ideas (pre-scan, confidence suppression, domain routing) were absorbed into ECHO.
150
 
151
  ---
152
 
153
+ ## Paradigms
154
 
155
+ | Name | Type | Status |
156
+ |---|---|---|
157
+ | Crystalline Intelligence (Vexa) | Non-neural knowledge substrate | 🔴 Paused |
158
+ | Living Error Memory (ECHO) | Scar-based mistake crystallization | 🔴 Build In Progress |
159
+ | Ferric Attention | Ownership-typed attention mechanism | 🩵 Research concept |
160
 
161
  ---
162
 
163
+ ## Reserved Names
164
 
165
+ These names are allocated to specific projects and are not available for other uses.
166
 
167
+ | Name | Allocated To |
168
+ |---|---|
169
+ | Vexa | Crystalline Intelligence Substrate |
170
+ | ECHO | Living Error Memory LLM |
171
+ | Axiom | Future extreme reasoning model (planned) |
172
+ | Lume | Declarative-relational language for Vexa |
173
 
174
  ---
175
 
176
+ ## Licensing
177
 
178
+ | Model Family | License |
179
+ |---|---|
180
+ | Zenith | Apache 2.0 |
181
+ | Vortex | Apache 2.0 |
182
+ | Touch Grass | Apache 2.0 |
183
+ | Matrix Lattice | Proprietary |
184
+ | Matrix ECHO | Apache 2.0 |
185
+ | Matrix Voxel (open tiers) | Apache 2.0 |
186
+ | Matrix Voxel Prime | Proprietary |
187
+ | Vexa | Apache 2.0 |
188
 
189
  ---
190
 
191
+ *Matrix.Corp – building intelligence that knows its own limits.*