Zandy-Wandy committed (verified) · Commit 698351f · 1 Parent(s): f5b7a06

Update README.md

Files changed (1): README.md (+100 −73)

README.md CHANGED
@@ -24,6 +24,7 @@ We believe intelligence should be **purpose-built, accessible, and honest**. Eve

 ### 🔷 Vexa — Crystalline Intelligence Substrate
 *NOT AN AI MODEL — A new computational paradigm*
 *Runs on any laptop · Fully open source · Lume language*

 > **Vexa is not a neural network. It is a Crystalline Intelligence Substrate** — a living, self-updating lattice of Glyphs that crystallizes knowledge in 10 minutes on any laptop, learns from the live web in real time, and runs on Ollama. No gradient descent. No GPU cluster. No frozen knowledge.
@@ -42,7 +43,7 @@ We believe intelligence should be **purpose-built, accessible, and honest**. Eve
 | Vexa Bridge (Ollama/vLLM/HF adapter) | Matrix-Corp/Vexa-Bridge | 🔴 Planned |
 | Vexa Crystallizer Engine | Matrix-Corp/Vexa-Crystallizer | 🔴 Planned |

- **Glyph Lattice density tiers (one architecture, scaled by density):**

 | Tier | Glyphs | RAM | Equivalent | Use Case |
 |---|---|---|---|---|
@@ -53,84 +54,99 @@ We believe intelligence should be **purpose-built, accessible, and honest**. Eve
 | Max | ~10B | 40GB | ~70B LLM | A100 / p300a |

 **Collection:** Matrix-Corp/vexa-v1 *(coming soon)*
- **Status:** 🔴 Planned — paradigm definition complete, implementation starting

 ---

 ### 🌌 Zenith — Reasoning & Emotional Intelligence
 *Optimized for Tenstorrent Blackhole p300a hardware*

- High-performance reasoning models built for the Tenstorrent p300a accelerator (dual-chip, 32 RISC-V cores, 64GB GDDR6). Each model features Ring Attention (32K context), Mixture of Experts, and our EQ Engine for emotional intelligence.

- | Model | Parameters | Base Model | Status | Use Case |
- |---|---|---|---|---|
- | [Zenith-7B-V1](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) | 7B | Qwen2.5-Coder-7B | 🟡 Preview | Code generation, fast inference |
- | [Zenith-28B-V1](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) | 28B | Qwen3.5-27B (Claude Opus 4.6 distilled) | 🟡 Preview | Nuanced reasoning, EQ-aware conversations |
- | [Zenith-32B-V1](https://huggingface.co/Matrix-Corp/Zenith-32b-p300-V1) | 32B | DeepSeek-R1-Distill-Qwen-32B | 🟡 Preview | Mathematical & structured reasoning |
- | [Zenith-70B-V1](https://huggingface.co/Matrix-Corp/Zenith-70b-p300-V1) | 70B | DeepSeek-R1-Distill-Llama-70B | 🟡 Preview | Maximum capability, multi-card setup |
-
- **Key features:**
- - 🧠 EQ Engine V1 — frustration detection + 8-emotion classification, fully integrated as core architecture
- - ⚡ Ring Attention — 32K context on limited memory
- - 🔀 Mixture of Experts — 12 experts, top-2 routing
- - 🖥️ p300a optimized — TP=8/PP=4 maps 1:1 to all 32 RISC-V cores
- - 📦 Ollama + vLLM compatible

 **Collection:** [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
- **Timeline:** Trained weights + real benchmarks expected in 3–6 months

 ---

 ### 🔬 Vortex Scientific — Deep Science Reasoning
- *Optimized for Apple Silicon & Nvidia 4060 laptop GPUs*

- From-scratch models built for scientific reasoning across Physics, Mathematics, Chemistry, Biology, Earth Science, Space Science, and Zoology. Novel hybrid state-space + attention architecture with specialized science modules.

- | Model | Parameters | Architecture | Status | Use Case |
- |---|---|---|---|---|
- | [Vortex-7B-V1](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) | 7B | Hybrid SSM + Attention (60% SSM) | 🟡 Preview | Science reasoning on consumer hardware |
- | [Vortex-13B-V1](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) | 13B | Hybrid SSM + Attention (50% SSM) | 🟡 Preview | Advanced scientific reasoning |

- **Key features:**
- - 🔭 No base model — built entirely from scratch including tokenizer
- - ⚗️ Science modules — Equation/LaTeX, Numerical Reasoning, Citation Awareness, Molecular/Periodic Table
- - 📏 Custom science tokenizer — 50K vocab with LaTeX symbols, element symbols, SI units, amino acids
- - 🖥️ Laptop-first — runs on MacBook Pro M2/M3 and Nvidia 4060
- - 🎓 School science project

 **Collection:** [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
- **Timeline:** Training data pipeline in progress, weights TBD

 ---

 ### 🌿 Touch Grass — Music AI Assistant
- *Ultra lightweight, runs on anything*

- Fine-tuned music assistant helping users learn instruments, understand music theory, write songs, train their ear, and explore music history. Warm, encouraging, beginner-friendly.

- | Model | Parameters | Base Model | Status | Use Case |
- |---|---|---|---|---|
- | [TouchGrass-3B](https://huggingface.co/Matrix-Corp/TouchGrass-3b) | 3B | Qwen3.5-3B-Instruct | 🟡 Preview | Ultra lightweight music assistant |
- | [TouchGrass-7B](https://huggingface.co/Matrix-Corp/TouchGrass-7b) | 7B | Qwen3.5-7B-Instruct | 🟡 Preview | Full-featured music assistant |
-
- **Key features:**
- - 🎸 All instruments — Guitar, Bass, Piano, Keys, Drums, Vocals, DJ & Production
- - 🎵 Tab & chord generation — structured, musically validated output
- - 🎼 Music theory engine — circle of fifths, modes, progressions, voice leading
- - 👂 Ear training guidance — interval recognition, chord quality, relative pitch
- - ✍️ Songwriting assistant — lyrics, progressions, structure, hooks
- - 💚 Music EQ adapter — frustration detection tuned for music learners
- - 📖 Genre & music history knowledge

 **Collection:** [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass)
- **Timeline:** Architecture published, training planned

 ---

 ### 🌐 Matrix Lattice — Frontier Agentic MoE
 *Inference provider deployment · Closed source · Long-term roadmap*

- Flagship frontier agentic + multimodal MoE family. Designed for inference provider deployment (Novita, Hyperbolic, Together, Fireworks). OpenAI-compatible API. 17 custom modules including EQ Engine V2, Multi-Agent Coordination Layer, Hierarchical Context Compression, and Safety Reasoning Module.

 | Model | Total Params | Active Params | Context | Status |
 |---|---|---|---|---|
@@ -144,18 +160,27 @@ Flagship frontier agentic + multimodal MoE family. Designed for inference provid

 ### 🎨 Matrix Voxel — 3D Generation
 *Flow Matching · Triplane Latent · A100 40GB*

- 3D generation model family sharing a flow-matching DiT backbone with task-specific decoder heads. 4 specialist models open source, Voxel Prime closed source API-only.

 | Model | Task | Outputs | Status |
 |---|---|---|---|
- | Voxel Atlas | World/environment generation | .vox, .obj, .usd | 🔴 Planned · 🟢 Open Source |
- | Voxel Forge | 3D mesh & asset generation | .obj, .glb, .fbx, .usdz | 🔴 Planned · 🟢 Open Source |
- | Voxel Cast | 3D printable generation | .stl, .step, .3mf | 🔴 Planned · 🟢 Open Source |
- | Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS), NeRF weights | 🔴 Planned · 🟢 Open Source |
- | Voxel Prime | All-in-one unified | All formats + pipeline mode | 🔴 Planned · 🟣 Closed Source |

- **Status:** 🔴 Planned — architecture complete

 ---
@@ -164,10 +189,10 @@ Flagship frontier agentic + multimodal MoE family. Designed for inference provid
 | Status | Meaning |
 |---|---|
 | 🟢 Released | Trained weights available, benchmarks published |
- | 🟡 Preview | Architecture and code published, training in progress or planned |
 | 🔴 Planned | Design complete, build not yet started |
 | 🩵 Long-Term Roadmap | Vision defined, significant research and compute required |
- | 🟣 Closed Source | Architecture may be public, weights and training are proprietary |

 ---
@@ -175,23 +200,24 @@ Flagship frontier agentic + multimodal MoE family. Designed for inference provid

 We build for **accessible, affordable hardware** — not just cloud GPUs:

- | Series | Target Hardware | Why |
- |---|---|---|
- | Vexa | Any 8-core laptop — CPU only | Crystalline Intelligence for everyone, no GPU required |
- | Zenith | Tenstorrent p300a | High-performance AI at a fraction of Nvidia cost |
- | Vortex | MacBook + 4060 laptop | Science AI for researchers and students anywhere |
- | Touch Grass | Any hardware | Music assistance should be universally accessible |
- | Matrix Lattice | 16–32× H100 / p300a | Frontier inference for providers |
- | Matrix Voxel | A100 40GB | 3D generation on a single card |

 ---

 ## Reserved Project Names

- These names are allocated to specific future projects — not available for other uses:
-
- - **Vexa** → Crystalline Intelligence Substrate (this project)
- - **Axiom** → Future extreme reasoning model (planned)

 ---
@@ -207,11 +233,12 @@ These names are allocated to specific future projects — not available for othe

 ## Collections

- - [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1) — All Zenith series models
- - [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1) — All Vortex series models
- - [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass) — Touch Grass music assistant models
- - Matrix-Corp/vexa-v1 *(coming soon)* — Vexa Crystalline Intelligence components
- - Matrix-Corp/voxel-v1 *(coming soon)* — Matrix Voxel 3D generation models

 ---
 ### 🔷 Vexa — Crystalline Intelligence Substrate
 *NOT AN AI MODEL — A new computational paradigm*
 *Runs on any laptop · Fully open source · Lume language*
+ **Status: 🔴 Spec complete — Build in progress**

 > **Vexa is not a neural network. It is a Crystalline Intelligence Substrate** — a living, self-updating lattice of Glyphs that crystallizes knowledge in 10 minutes on any laptop, learns from the live web in real time, and runs on Ollama. No gradient descent. No GPU cluster. No frozen knowledge.
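Vexa and the Lume language are unreleased, so the "crystallization without gradients" idea above can only be sketched hypothetically. A toy Python rendering with invented `Glyph`/`Lattice` names: repeated observations strengthen links in a lattice rather than updating weights by gradient descent.

```python
from dataclasses import dataclass, field

@dataclass
class Glyph:
    """One crystallized unit of knowledge; links carry reinforcement weights."""
    name: str
    links: dict = field(default_factory=dict)  # neighbor name -> weight

class Lattice:
    """Toy Glyph lattice (hypothetical): co-occurrence strengthens links."""
    def __init__(self):
        self.glyphs = {}

    def crystallize(self, a, b):
        """Observe that concepts a and b co-occur; reinforce the a<->b link."""
        for name in (a, b):
            self.glyphs.setdefault(name, Glyph(name))
        self.glyphs[a].links[b] = self.glyphs[a].links.get(b, 0) + 1
        self.glyphs[b].links[a] = self.glyphs[b].links.get(a, 0) + 1

    def strongest(self, name):
        """Return the most strongly linked neighbor of a glyph."""
        links = self.glyphs[name].links
        return max(links, key=links.get)

lattice = Lattice()
lattice.crystallize("water", "H2O")
lattice.crystallize("water", "H2O")
lattice.crystallize("water", "ice")
print(lattice.strongest("water"))  # -> H2O
```

New facts are incorporated by calling `crystallize` again at any time, which is the sense in which such a structure is "living" rather than frozen at training time.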
 | Vexa Bridge (Ollama/vLLM/HF adapter) | Matrix-Corp/Vexa-Bridge | 🔴 Planned |
 | Vexa Crystallizer Engine | Matrix-Corp/Vexa-Crystallizer | 🔴 Planned |

+ **Glyph Lattice density tiers:**

 | Tier | Glyphs | RAM | Equivalent | Use Case |
 |---|---|---|---|---|

 | Max | ~10B | 40GB | ~70B LLM | A100 / p300a |

 **Collection:** Matrix-Corp/vexa-v1 *(coming soon)*
+
+ ---
+
+ ### ⚡ Kairiq — Critical Moment Intelligence Module
+ *Universal plug-in module · Built entirely in Lume · Slap-on adapters*
+ **Status: 🔴 Architecture complete — Build starting**
+
+ > **Kairiq is not a model. It is a universal intelligence amplifier** — a plug-in module built entirely in `.lume` files that attaches to any Matrix.Corp model and elevates it to elite benchmark performance. A Kairiq-enhanced 32B competes with a vanilla 70B — not because it knows more, but because it deploys what it knows at exactly the right moment, the right depth, and the right priority.
+
+ **The Three KQ Dimensions:**
+ - ⏱ **Temporal Acuity (T)** — sensing rhythm, pacing, and momentum; knowing when a reasoning path is collapsing before it wastes compute
+ - ⚡ **Tension Sensing (X)** — reading pressure gradients; identifying load-bearing sub-problems where failure cascades everywhere
+ - 👑 **Imperial Hierarchy (H)** — the rank and weight of every sub-problem: what overrides what, and the actual question beneath the stated question
+
+ **Three-layer pipeline:** Kairiq Gate → Kairiq Router → Base Model → Kairiq Verifier
+
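Kairiq is unreleased, so the Gate / Router / Verifier pipeline can only be illustrated with stub logic. A hypothetical Python sketch of the three layers wrapping a stand-in base model (every function here is invented for illustration):

```python
def kairiq_gate(prompt):
    """Gate (toy H): estimate difficulty so easy prompts skip deep reasoning."""
    return "deep" if len(prompt.split()) > 12 else "fast"

def kairiq_router(mode):
    """Router (toy): pick a decoding budget matching the gated difficulty."""
    return {"fast": {"max_steps": 1}, "deep": {"max_steps": 4}}[mode]

def base_model(prompt, max_steps):
    """Stub standing in for any Matrix.Corp base model."""
    return f"answer after {max_steps} step(s)"

def kairiq_verifier(prompt, answer):
    """Verifier (toy T): accept, or trigger one bounded rewrite pass."""
    return answer if "answer" in answer else base_model(prompt, max_steps=2)

def kairiq(prompt):
    # Gate -> Router -> Base Model -> Verifier, as in the pipeline above.
    budget = kairiq_router(kairiq_gate(prompt))
    return kairiq_verifier(prompt, base_model(prompt, **budget))

print(kairiq("2+2?"))  # -> answer after 1 step(s)  (short prompt, fast path)
```

The design point is that the base model is untouched: all three layers sit outside it, which is what a "slap-on adapter" requires.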
+ **Benchmark targets:**
+
+ | Benchmark | KQ Dimension | Expected Gain |
+ |---|---|---|
+ | MMLU | H — ranks the actual question instantly | +4–6% |
+ | HumanEval | T — kills dead reasoning paths early | +6–9% |
+ | MATH | X — slow-deep on load-bearing steps | +5–8% |
+ | ARC | H — no reasoning overkill on simple problems | +3–5% |
+ | HellaSwag | T — reads narrative momentum correctly | +4–7% |
+
+ **Written entirely in `.lume`, with a Python `adapter.py` bridge for any existing model. Zero changes to the base model required.**
+
+ | Component | Repo | Status |
+ |---|---|---|
+ | Kairiq Module V1 | Matrix-Corp/Kairiq-Module-V1 | 🔴 Planned |
+ | Lume Kairiq Spec | Matrix-Corp/Lume-Kairiq-Spec | 🔴 Planned |
+ | Zenith-32B-Kairiq-V1 | Matrix-Corp/Zenith-32B-Kairiq-V1 | 🔴 Planned |
+
+ **Collection:** Matrix-Corp/kairiq-v1 *(coming soon)*
 ---

 ### 🌌 Zenith — Reasoning & Emotional Intelligence
 *Optimized for Tenstorrent Blackhole p300a hardware*

+ High-performance reasoning models built for the Tenstorrent p300a accelerator (dual-chip, 32 RISC-V cores, 64GB GDDR6). Ring Attention (32K context), Mixture of Experts, and EQ Engine V1 for emotional intelligence.

+ | Model | Parameters | Base Model | Status |
+ |---|---|---|---|
+ | [Zenith-7B-V1](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) | 7B | Qwen2.5-Coder-7B | 🟡 Preview |
+ | [Zenith-28B-V1](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) | 28B | Qwen3.5-27B (Claude Opus 4.6 distilled) | 🟡 Preview |
+ | [Zenith-32B-V1](https://huggingface.co/Matrix-Corp/Zenith-32b-p300-V1) | 32B | DeepSeek-R1-Distill-Qwen-32B | 🟡 Preview |
+ | [Zenith-70B-V1](https://huggingface.co/Matrix-Corp/Zenith-70b-p300-V1) | 70B | DeepSeek-R1-Distill-Llama-70B | 🟡 Preview |
+
+ **Key features:** EQ Engine V1 · Ring Attention 32K · MoE 12 experts top-2 · TP=8/PP=4 · Ollama + vLLM compatible
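The "32K context on limited memory" claim rests on how Ring Attention streams key/value blocks instead of materializing the full attention matrix; the enabling piece is an online softmax that folds one block in at a time. A single-query NumPy sketch of that streaming trick (an illustration of the technique, not Zenith's implementation):

```python
import numpy as np

def attention(q, K, V):
    """Reference single-query softmax attention over the full K/V."""
    s = q @ K.T
    w = np.exp(s - s.max())
    return (w / w.sum()) @ V

def ring_attention(q, K, V, block=4):
    """Fold K/V in small blocks with a running (online) softmax, so only
    `block` rows are held at once; this is the memory trick that lets
    ring-style attention stretch context length."""
    m = -np.inf                  # running max of logits (for stability)
    denom = 0.0                  # running softmax denominator
    acc = np.zeros(V.shape[1])   # running weighted sum of values
    for i in range(0, len(K), block):
        s = q @ K[i:i + block].T           # logits for this block only
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)          # rescale old partial sums
        w = np.exp(s - m_new)
        denom = denom * scale + w.sum()
        acc = acc * scale + w @ V[i:i + block]
        m = m_new
    return acc / denom

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(16, 8)), rng.normal(size=(16, 4))
assert np.allclose(ring_attention(q, K, V), attention(q, K, V))
```

In the real multi-device setting the blocks circulate around a ring of accelerators; the per-block update rule is the same.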
 
 **Collection:** [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1)

 ---
 
 ### 🔬 Vortex Scientific — Deep Science Reasoning
+ *Optimized for Apple Silicon & Nvidia 4060 · Built from scratch — no base model*

+ From-scratch models for scientific reasoning across Physics, Mathematics, Chemistry, Biology, Earth Science, Space Science, and Zoology. Hybrid SSM + attention architecture with a custom 50K science tokenizer and 4 specialized science modules.

+ | Model | Parameters | Architecture | Status |
+ |---|---|---|---|
+ | [Vortex-7B-V1](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) | 7B | Hybrid SSM + Attention (60% SSM) | 🟡 Preview |
+ | [Vortex-13B-V1](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) | 13B | Hybrid SSM + Attention (50% SSM) | 🟡 Preview |

+ **Key features:** No base model · Custom science tokenizer · Equation/LaTeX · Molecular/Periodic Table modules · Laptop-first
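The Vortex tokenizer and its 50K vocab are unreleased; as a hedged illustration of why a science-aware vocab matters, here is a toy greedy longest-match tokenizer whose tiny invented vocab keeps element symbols and LaTeX commands as single tokens instead of shattering them into characters:

```python
# Toy vocab standing in for the (unreleased) 50K science vocab.
VOCAB = {
    "\\frac", "\\sqrt", "\\alpha",   # LaTeX commands as single tokens
    "Na", "Cl", "H", "O",            # element symbols
    "mol", "kg", "m/s",              # SI units
    "2", "{", "}", "+", " ",
}

def tokenize(text):
    """Greedy longest-match against the vocab; unknown chars pass through."""
    tokens, i = [], 0
    while i < len(text):
        match = next((text[i:i + n] for n in range(6, 0, -1)
                      if text[i:i + n] in VOCAB), text[i])
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("NaCl"))          # -> ['Na', 'Cl'], not ['N', 'a', 'C', 'l']
print(tokenize("\\frac{2}{2}"))  # -> ['\\frac', '{', '2', '}', '{', '2', '}']
```

A generic BPE vocab trained on web text tends to split `NaCl` and `\frac` unpredictably, which is the problem a purpose-built science vocab avoids.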
 
 
 
 
 
 **Collection:** [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1)

 ---
 
 ### 🌿 Touch Grass — Music AI Assistant
+ *Ultra lightweight · Runs on anything*

+ Fine-tuned music assistant for learning instruments, music theory, songwriting, ear training, and music history. Warm, encouraging, beginner-friendly.

+ | Model | Parameters | Base Model | Status |
+ |---|---|---|---|
+ | [TouchGrass-3B](https://huggingface.co/Matrix-Corp/TouchGrass-3b) | 3B | Qwen3.5-3B-Instruct | 🟡 Preview |
+ | [TouchGrass-7B](https://huggingface.co/Matrix-Corp/TouchGrass-7b) | 7B | Qwen3.5-7B-Instruct | 🟡 Preview |
+
+ **Key features:** All instruments · Tab & chord generation · Music theory engine · Ear training · Songwriting assistant · Music EQ adapter
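The music theory engine described here (circle of fifths, progressions, voice leading) reduces to small interval arithmetic over the 12-tone scale. A minimal sketch, not TouchGrass code:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths(start="C"):
    """Walk twelve perfect fifths (7 semitones each) from a start note."""
    i = NOTES.index(start)
    return [NOTES[(i + 7 * k) % 12] for k in range(12)]

def major_triad(root):
    """Root + major third (4 semitones) + perfect fifth (7 semitones)."""
    i = NOTES.index(root)
    return [NOTES[i], NOTES[(i + 4) % 12], NOTES[(i + 7) % 12]]

print(circle_of_fifths()[:5])  # -> ['C', 'G', 'D', 'A', 'E']
print(major_triad("G"))        # -> ['G', 'B', 'D']
```

Checks like these are presumably what "musically validated output" means for generated tabs and chords: every emitted chord can be verified against interval rules before it reaches the user.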
 
 
 
 
 
 
 
 **Collection:** [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass)

 ---
 
 ### 🌐 Matrix Lattice — Frontier Agentic MoE
 *Inference provider deployment · Closed source · Long-term roadmap*
+ **Status: 🩵 Long-Term Roadmap · Spec complete**

+ Flagship frontier agentic + multimodal MoE family with 17 custom modules including EQ Engine V2, Multi-Agent Coordination Layer (MACL), Hierarchical Context Compression Engine (HCCE), and Safety Reasoning Module. Designed for inference provider deployment (Novita, Hyperbolic, Together, Fireworks).
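The earlier README text notes an OpenAI-compatible API for this family, which is the convention among the inference providers listed. A sketch of the request body a client would build for such an endpoint; the model name is a placeholder, since Lattice is unreleased:

```python
def build_chat_request(model, user_msg, max_tokens=256):
    """Request body for POST /v1/chat/completions on any OpenAI-compatible
    endpoint. The model name passed in below is a placeholder."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("matrix-lattice-preview", "Plan a 3-step task.")
print(payload["model"])  # -> matrix-lattice-preview
```

Keeping to this wire format is what lets existing OpenAI SDKs talk to a new provider by changing only the base URL and model name.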
 | Model | Total Params | Active Params | Context | Status |
 |---|---|---|---|---|
 

 ### 🎨 Matrix Voxel — 3D Generation
 *Flow Matching · Triplane Latent · A100 40GB*
+ **Status: 🔴 Planned — Architecture complete**

+ 3D generation model family sharing a flow-matching DiT backbone (~2.3B) with task-specific decoder heads.
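Flow matching generates by integrating a learned velocity field from noise at t = 0 to data at t = 1. A NumPy sketch of that sampling loop, with a closed-form velocity toward one known target standing in for the (unbuilt) DiT backbone; the target vector is an invented stand-in for a triplane latent:

```python
import numpy as np

def velocity(x, t, target):
    """Closed-form conditional velocity for a straight-line probability path
    to a single fixed target; a stand-in for the learned velocity network."""
    return (target - x) / (1.0 - t)

def sample(target, steps=100, seed=0):
    """Euler-integrate dx/dt = v(x, t) from noise (t=0) toward data (t=1)."""
    x = np.random.default_rng(seed).normal(size=target.shape)
    dt = 1.0 / steps
    for k in range(steps):
        x = x + velocity(x, k * dt, target) * dt
    return x

target = np.array([0.5, -1.0, 2.0])   # invented stand-in for a latent
assert np.allclose(sample(target), target)
```

In the real model the velocity network replaces `velocity`, and the decoder heads (.obj, .stl, 3DGS, etc.) consume the latent that the integration produces.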
 | Model | Task | Outputs | Status |
 |---|---|---|---|
+ | Voxel Atlas | World/environment generation | .vox, .obj, .usd | 🔴 Planned · 🟢 Open |
+ | Voxel Forge | 3D mesh & asset generation | .obj, .glb, .fbx, .usdz | 🔴 Planned · 🟢 Open |
+ | Voxel Cast | 3D printable generation | .stl, .step, .3mf | 🔴 Planned · 🟢 Open |
+ | Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS), NeRF weights | 🔴 Planned · 🟢 Open |
+ | Voxel Prime | All-in-one unified | All formats | 🔴 Planned · 🟣 Closed |
+ ---
+
+ ### ⚒️ Matrix Axiom — Extreme Reasoning Code Intelligence
+ *Reserved · Design in progress*
+ **Status: 🔴 Reserved — Design starting**
+
+ > Axiom is Matrix.Corp's planned extreme reasoning coding model: a hybrid architecture on a proven base with structurally enforced pre-code reasoning, a self-verify + rewrite loop, an internal multi-agent trident (Architect · Artisan · Auditor), and a custom semantic code tokenizer. Targets #1 on HumanEval. Any hardware. Kairiq-enhanced at launch.
+
+ *Full spec coming soon.*
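The self-verify + rewrite loop can be sketched with stubs, since Axiom has no published spec: generate a candidate, execute it against tests, and rewrite on failure. Both `generate` and the task are invented for illustration:

```python
def generate(task, attempt):
    """Stub generator: imagine a code model whose first draft has a bug."""
    return ("def add(a, b): return a - b" if attempt == 0
            else "def add(a, b): return a + b")

def verify(code):
    """Verifier: execute the candidate and run it against unit tests."""
    ns = {}
    exec(code, ns)
    return ns["add"](2, 3) == 5

def solve(task, max_rewrites=3):
    """Generate -> verify -> rewrite until tests pass (toy Axiom-style loop)."""
    for attempt in range(max_rewrites + 1):
        code = generate(task, attempt)
        if verify(code):
            return code, attempt
    raise RuntimeError("no passing candidate within rewrite budget")

code, rewrites = solve("write add(a, b)")
print(rewrites)  # -> 1  (first draft failed verification, rewrite passed)
```

The value of the loop is that correctness is enforced by execution rather than asserted by the generator, which is what makes benchmark targets like HumanEval a natural fit for it.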
 
 ---

 | Status | Meaning |
 |---|---|
 | 🟢 Released | Trained weights available, benchmarks published |
+ | 🟡 Preview | Architecture published, training in progress or planned |
 | 🔴 Planned | Design complete, build not yet started |
 | 🩵 Long-Term Roadmap | Vision defined, significant research and compute required |
+ | 🟣 Closed Source | Weights and training are proprietary |

 ---
 

 We build for **accessible, affordable hardware** — not just cloud GPUs:

+ | Series | Target Hardware |
+ |---|---|
+ | Vexa | Any 8-core laptop — CPU only |
+ | Kairiq | Any — universal plug-in |
+ | Zenith | Tenstorrent Blackhole p300a |
+ | Vortex | MacBook M2/M3 + Nvidia 4060 laptop |
+ | Touch Grass | Any hardware |
+ | Axiom | Any / multi-target |
+ | Matrix Lattice | 16–32× H100 / p300a |
+ | Matrix Voxel | A100 40GB |

 ---
 
 ## Reserved Project Names

+ - **Vexa** → Crystalline Intelligence Substrate
+ - **Kairiq** → Critical Moment Intelligence Module
+ - **Axiom** → Extreme Reasoning Code Intelligence

 ---
 

 ## Collections

+ - [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
+ - [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
+ - [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass)
+ - Matrix-Corp/vexa-v1 *(coming soon)*
+ - Matrix-Corp/kairiq-v1 *(coming soon)*
+ - Matrix-Corp/voxel-v1 *(coming soon)*

 ---