Zandy-Wandy committed on
Commit f5b7a06 · verified
1 Parent(s): eddcc43

Update README.md

Files changed (1)
  1. README.md +88 -31
README.md CHANGED
@@ -9,17 +9,53 @@ pinned: true

  # Matrix.Corp 🚀

- Welcome to **Matrix.Corp** — an independent AI research organization building specialized language models for real-world use cases.

  ---

  ## Our Philosophy

- We believe AI models should be **purpose-built**, not one-size-fits-all. Every model in our family is designed from the ground up for a specific domain, user, and hardware target. We focus on novel architectures, hardware-aware optimization, and emotional intelligence across all our models.

  ---

- ## Model Families

  ### 🌌 Zenith — Reasoning & Emotional Intelligence
  *Optimized for Tenstorrent Blackhole p300a hardware*
@@ -30,16 +66,17 @@ High-performance reasoning models built for the Tenstorrent p300a accelerator (d
  |---|---|---|---|---|
  | [Zenith-7B-V1](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) | 7B | Qwen2.5-Coder-7B | 🟡 Preview | Code generation, fast inference |
  | [Zenith-28B-V1](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) | 28B | Qwen3.5-27B (Claude Opus 4.6 distilled) | 🟡 Preview | Nuanced reasoning, EQ-aware conversations |
- | [Zenith-32B-V1](https://huggingface.co/Matrix-Corp/Zenith-32b-V1-Tenstorrent-Blackhole-p300) | 32B | DeepSeek-R1-Distill-Qwen-32B | 🟡 Preview | Mathematical & structured reasoning |
- | [Zenith-70B-V1](https://huggingface.co/Matrix-Corp/Zenith-70b-V1-Tenstorrent-Blackhole-p300) | 70B | DeepSeek-R1-Distill-Llama-70B | 🟡 Preview | Maximum capability, multi-card setup |

  **Key features:**
- - 🧠 EQ Engine — frustration detection + 8-emotion classification, fully integrated as core architecture (V1)
  - ⚡ Ring Attention — 32K context on limited memory
  - 🔀 Mixture of Experts — 12 experts, top-2 routing
  - 🖥️ p300a optimized — TP=8/PP=4 maps 1:1 to all 32 RISC-V cores
  - 📦 Ollama + vLLM compatible

  **Timeline:** Trained weights + real benchmarks expected in 3–6 months

  ---
@@ -61,6 +98,7 @@ From-scratch models built for scientific reasoning across Physics, Mathematics,
  - 🖥️ Laptop-first — runs on MacBook Pro M2/M3 and Nvidia 4060
  - 🎓 School science project

  **Timeline:** Training data pipeline in progress, weights TBD

  ---
@@ -84,33 +122,40 @@ Fine-tuned music assistant helping users learn instruments, understand music the
  - 💚 Music EQ adapter — frustration detection tuned for music learners
  - 📖 Genre & music history knowledge

- **Timeline:** Architecture and code published, trained weights coming soon

  ---

- ### 🕸️ Matrix Lattice — Frontier Agentic & Multimodal
- *Targeting inference providers — API access only*

- Matrix.Corp's flagship frontier family. Massive MoE models built for deployment on inference provider infrastructure (Novita, Hyperbolic, Together, Fireworks, etc.) and accessed via OpenAI-compatible API. Agentic-first, natively multimodal, 1M+ context window, 17 custom modules.

- | Model | Parameters | Active Params | Status | Access |
  |---|---|---|---|---|
- | Lattice-120B | 120B | ~22B | 🩵 Long-Term Roadmap | 🟣 Closed Source |
- | Lattice-430B | 430B | ~38B | 🩵 Long-Term Roadmap | 🟣 Closed Source |
- | Lattice-671B | 671B | ~47B | 🩵 Long-Term Roadmap | 🟣 Closed Source |

- **Key features:**
- - 🕸️ Hierarchical MoE — 64/128/256 experts, top-4 routing, domain-cluster aware
- - 🧠 17 custom modules — EQ Engine V2, Lattice Router, Confidence Head, MACL, HCCE, Safety Reasoning, and more
- - 👁️ Native multimodal — vision (ViT, 4K tiling), audio (Whisper lineage), documents, video frames
- - 🤖 Agentic-first — native tool calling, multi-agent coordination, long-horizon task planning
- - 📏 1M+ context — MLA attention + hierarchical context compression make it practical
- - ⚡ Speculative decoding — paired draft models for 3–5× inference speedup
- - 🔌 OpenAI-compatible API — with Lattice extensions (confidence scores, module trace, hallucination risk)

- **Timeline:** Architecture specification complete. This is a long-term project requiring significant compute infrastructure. No training ETA at this time.

- **Access:** Closed source — will be available exclusively via inference provider APIs when released. Weights will not be publicly distributed.

  ---
 
@@ -119,10 +164,10 @@ Matrix.Corp's flagship frontier family. Massive MoE models built for deployment
  | Status | Meaning |
  |---|---|
  | 🟢 Released | Trained weights available, benchmarks published |
- | 🟡 Preview | Architecture and code published, training in progress |
  | 🔴 Planned | Design complete, build not yet started |
- | 🩵 Long-Term Roadmap | Architecture complete, significant compute required, no ETA |
- | 🟣 Closed Source | Weights will not be publicly released |

  ---
 
@@ -132,10 +177,21 @@ We build for **accessible, affordable hardware** — not just cloud GPUs:

  | Series | Target Hardware | Why |
  |---|---|---|
  | Zenith | Tenstorrent p300a | High-performance AI at a fraction of Nvidia cost |
  | Vortex | MacBook + 4060 laptop | Science AI for researchers and students anywhere |
  | Touch Grass | Any hardware | Music assistance should be universally accessible |
- | Lattice | Inference provider clusters (H100 / p300a) | Frontier capability via API — no local hardware needed |

  ---
 
@@ -153,9 +209,10 @@ We build for **accessible, affordable hardware** — not just cloud GPUs:

  - [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1) — All Zenith series models
  - [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1) — All Vortex series models
- - [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass) — All Touch Grass series models
- - Lattice *(collection coming when models are published)*

  ---

- *Building specialized AI, one model at a time.* 🚀
 

  # Matrix.Corp 🚀

+ Welcome to **Matrix.Corp** — an independent AI research organization pushing the boundaries of what intelligence can be. We build specialized models, novel architectures, and entirely new paradigms for real-world use cases.

  ---

  ## Our Philosophy

+ We believe intelligence should be **purpose-built, accessible, and honest**. Every project in our ecosystem is designed from the ground up for a specific domain, user, and hardware target. We focus on novel architectures, hardware-aware optimization, emotional intelligence, and — with Vexa — a completely new computational paradigm that goes beyond AI entirely.

  ---

+ ## Projects & Model Families
+
+ ### 🔷 Vexa — Crystalline Intelligence Substrate
+ *NOT AN AI MODEL — A new computational paradigm*
+ *Runs on any laptop · Fully open source · Lume language*
+
+ > **Vexa is not a neural network. It is a Crystalline Intelligence Substrate** — a living, self-updating lattice of Glyphs that crystallizes knowledge in 10 minutes on any laptop, learns from the live web in real time, and runs on Ollama. No gradient descent. No GPU cluster. No frozen knowledge.
+
+ **Core components:**
+ - 🔷 **Glyph Lattice** — structured meaning objects, not weights. Every Glyph has identity, typed relations, confidence, source references, and a decay function
+ - ⚡ **Crystallization** — replaces training. 5-phase pipeline, 10 min, any 8-core CPU, no GPU needed
+ - 🌐 **Real-Time Learning** — 3 live threads: Web Crystallizer, Interaction Crystallizer, Decay Monitor. Never goes stale
+ - 💻 **Lume** — declarative-relational programming language where meaning is a first-class citizen
+ - 🔌 **Vexa Bridge** — Ollama / vLLM / HuggingFace compatible adapter
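The Glyph description above can be sketched as a plain data structure. Everything here is a hypothetical illustration under the stated design (identity, typed relations, confidence, source references, a decay function) — the field names, the exponential half-life decay, and the `refresh` behavior are assumptions, not the actual Vexa implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Glyph:
    """Hypothetical Glyph: a structured meaning object, not a weight."""
    identity: str                                   # stable concept identifier
    relations: dict = field(default_factory=dict)   # typed edges: relation -> set of identities
    confidence: float = 1.0                         # belief strength in [0, 1]
    sources: list = field(default_factory=list)     # provenance references (URLs, doc ids)
    half_life: float = 30 * 86400.0                 # assumed decay half-life, in seconds
    updated_at: float = field(default_factory=time.time)

    def decayed_confidence(self, now=None):
        """Exponential decay: a stale Glyph loses confidence until refreshed."""
        now = time.time() if now is None else now
        age = max(0.0, now - self.updated_at)
        return self.confidence * 0.5 ** (age / self.half_life)

    def refresh(self, confidence, source):
        """A re-crystallization pass resets the decay clock and records provenance."""
        self.confidence = confidence
        self.sources.append(source)
        self.updated_at = time.time()

g = Glyph(identity="tenstorrent/p300a")
g.relations["is_a"] = {"ai-accelerator"}
g.refresh(0.9, "https://example.com/spec")   # made-up source reference
print(round(g.decayed_confidence(), 2))      # freshly refreshed -> 0.9
```

The decay function is what the Decay Monitor thread would presumably evaluate: Glyphs whose decayed confidence drops below a threshold become candidates for re-crystallization from the live web.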
+
+ | Component | Repo | Status |
+ |---|---|---|
+ | Vexa-V1 (Max density, ~10B Glyphs) | Matrix-Corp/Vexa-V1 | 🔴 Planned |
+ | Vexa-Micro-V1 (Laptop, ~10M Glyphs, 4GB RAM) | Matrix-Corp/Vexa-Micro-V1 | 🔴 Planned |
+ | Lume Language Spec + Parser | Matrix-Corp/Lume-Language-Spec | 🔴 Planned |
+ | Vexa Bridge (Ollama/vLLM/HF adapter) | Matrix-Corp/Vexa-Bridge | 🔴 Planned |
+ | Vexa Crystallizer Engine | Matrix-Corp/Vexa-Crystallizer | 🔴 Planned |
+
+ **Glyph Lattice density tiers (one architecture, scaled by density):**
+
+ | Tier | Glyphs | RAM | Equivalent | Use Case |
+ |---|---|---|---|---|
+ | Nano | ~1M | 2GB | ~1B LLM | IoT, edge devices |
+ | Micro | ~10M | 4GB | ~7B LLM | Laptops, Raspberry Pi |
+ | Core | ~100M | 8GB | ~13B LLM | Consumer GPU |
+ | Dense | ~1B | 16GB | ~30B LLM | Workstation |
+ | Max | ~10B | 40GB | ~70B LLM | A100 / p300a |
+
+ **Collection:** Matrix-Corp/vexa-v1 *(coming soon)*
+ **Status:** 🔴 Planned — paradigm definition complete, implementation starting
+
+ ---
 
  ### 🌌 Zenith — Reasoning & Emotional Intelligence
  *Optimized for Tenstorrent Blackhole p300a hardware*

  |---|---|---|---|---|
  | [Zenith-7B-V1](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) | 7B | Qwen2.5-Coder-7B | 🟡 Preview | Code generation, fast inference |
  | [Zenith-28B-V1](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) | 28B | Qwen3.5-27B (Claude Opus 4.6 distilled) | 🟡 Preview | Nuanced reasoning, EQ-aware conversations |
+ | [Zenith-32B-V1](https://huggingface.co/Matrix-Corp/Zenith-32b-p300-V1) | 32B | DeepSeek-R1-Distill-Qwen-32B | 🟡 Preview | Mathematical & structured reasoning |
+ | [Zenith-70B-V1](https://huggingface.co/Matrix-Corp/Zenith-70b-p300-V1) | 70B | DeepSeek-R1-Distill-Llama-70B | 🟡 Preview | Maximum capability, multi-card setup |

  **Key features:**
+ - 🧠 EQ Engine V1 — frustration detection + 8-emotion classification, fully integrated as core architecture
  - ⚡ Ring Attention — 32K context on limited memory
  - 🔀 Mixture of Experts — 12 experts, top-2 routing
  - 🖥️ p300a optimized — TP=8/PP=4 maps 1:1 to all 32 RISC-V cores
  - 📦 Ollama + vLLM compatible
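The top-2 routing mentioned in the feature list can be illustrated in a few lines. This is a generic sketch of softmax gating with top-k renormalization — not Zenith's actual learned router, which would be a trained layer with load balancing; the logits below are made up.

```python
import math

def top2_route(logits, k=2):
    """Pick the top-k experts by gate probability and renormalize their weights."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # indices of the k highest-probability experts
    top = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    # each token is sent only to these experts, mixed by renormalized weight
    return [(i, probs[i] / norm) for i in top]

# 12 experts, as in the Zenith spec; gate logits here are invented
logits = [0.1, 2.0, -1.0, 0.5, 1.8, 0.0, -0.5, 0.3, 0.2, -2.0, 1.0, 0.4]
print(top2_route(logits))  # two (expert index, mix weight) pairs summing to 1
```

Because only 2 of 12 experts run per token, active compute stays a small fraction of total parameters — the same sparsity argument the Lattice family's top-4 routing relies on at larger scale.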

+ **Collection:** [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
  **Timeline:** Trained weights + real benchmarks expected in 3–6 months

  ---
 
  - 🖥️ Laptop-first — runs on MacBook Pro M2/M3 and Nvidia 4060
  - 🎓 School science project

+ **Collection:** [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
  **Timeline:** Training data pipeline in progress, weights TBD

  ---
 
  - 💚 Music EQ adapter — frustration detection tuned for music learners
  - 📖 Genre & music history knowledge

+ **Collection:** [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass)
+ **Timeline:** Architecture published, training planned

  ---

+ ### 🌐 Matrix Lattice — Frontier Agentic MoE
+ *Inference provider deployment · Closed source · Long-term roadmap*

+ Flagship frontier agentic + multimodal MoE family. Designed for inference provider deployment (Novita, Hyperbolic, Together, Fireworks). OpenAI-compatible API. 17 custom modules including EQ Engine V2, Multi-Agent Coordination Layer, Hierarchical Context Compression, and Safety Reasoning Module.

+ | Model | Total Params | Active Params | Context | Status |
  |---|---|---|---|---|
+ | Lattice-120B | 120B | ~22B | 1M tokens | 🩵 Long-Term Roadmap |
+ | Lattice-430B | 430B | ~38B | 1M tokens | 🩵 Long-Term Roadmap |
+ | Lattice-671B | 671B | ~47B | 1M tokens | 🩵 Long-Term Roadmap |

+ **Status:** 🩵 Long-Term Roadmap · 🟣 Closed Source
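Since Lattice targets OpenAI-compatible serving, a client request would presumably be a standard chat-completions payload sent to a provider's endpoint. A minimal sketch, assuming a hypothetical model id — no Lattice API or endpoint is published yet, so nothing here is official:

```python
import json

# Hypothetical request body; "matrix-corp/lattice-120b" is an assumed model id.
payload = {
    "model": "matrix-corp/lattice-120b",
    "messages": [
        {"role": "system", "content": "You are a planning agent."},
        {"role": "user", "content": "Draft a 3-step rollout plan."},
    ],
    "max_tokens": 512,
}
body = json.dumps(payload)
# This body would be POSTed to <provider>/v1/chat/completions with an API key
# header — e.g. via the standard OpenAI client pointed at the provider's base URL.
print(len(json.loads(body)["messages"]))  # 2
```

The point of OpenAI compatibility is exactly this: existing clients only need a different base URL and model id, with no code changes.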
+
+ ---
+
+ ### 🎨 Matrix Voxel — 3D Generation
+ *Flow Matching · Triplane Latent · A100 40GB*
+
+ 3D generation model family sharing a flow-matching DiT backbone with task-specific decoder heads. Four specialist models are open source; Voxel Prime is closed source and API-only.

+ | Model | Task | Outputs | Status |
+ |---|---|---|---|
+ | Voxel Atlas | World/environment generation | .vox, .obj, .usd | 🔴 Planned · 🟢 Open Source |
+ | Voxel Forge | 3D mesh & asset generation | .obj, .glb, .fbx, .usdz | 🔴 Planned · 🟢 Open Source |
+ | Voxel Cast | 3D printable generation | .stl, .step, .3mf | 🔴 Planned · 🟢 Open Source |
+ | Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS), NeRF weights | 🔴 Planned · 🟢 Open Source |
+ | Voxel Prime | All-in-one unified | All formats + pipeline mode | 🔴 Planned · 🟣 Closed Source |

+ **Status:** 🔴 Planned — architecture complete

  ---
 
  | Status | Meaning |
  |---|---|
  | 🟢 Released | Trained weights available, benchmarks published |
+ | 🟡 Preview | Architecture and code published, training in progress or planned |
  | 🔴 Planned | Design complete, build not yet started |
+ | 🩵 Long-Term Roadmap | Vision defined, significant research and compute required |
+ | 🟣 Closed Source | Architecture may be public, weights and training are proprietary |

  ---
 
 

  | Series | Target Hardware | Why |
  |---|---|---|
+ | Vexa | Any 8-core laptop — CPU only | Crystalline Intelligence for everyone, no GPU required |
  | Zenith | Tenstorrent p300a | High-performance AI at a fraction of Nvidia cost |
  | Vortex | MacBook + 4060 laptop | Science AI for researchers and students anywhere |
  | Touch Grass | Any hardware | Music assistance should be universally accessible |
+ | Matrix Lattice | 16–32× H100 / p300a | Frontier inference for providers |
+ | Matrix Voxel | A100 40GB | 3D generation on a single card |
+
+ ---
+
+ ## Reserved Project Names
+
+ These names are allocated to specific future projects — not available for other uses:
+
+ - **Vexa** → Crystalline Intelligence Substrate (this project)
+ - **Axiom** → Future extreme reasoning model (planned)

  ---
 
 

  - [Zenith V1](https://huggingface.co/collections/Matrix-Corp/zenith-v1) — All Zenith series models
  - [Vortex V1](https://huggingface.co/collections/Matrix-Corp/vortex-v1) — All Vortex series models
+ - [Touch Grass](https://huggingface.co/collections/Matrix-Corp/touch-grass) — Touch Grass music assistant models
+ - Matrix-Corp/vexa-v1 *(coming soon)* — Vexa Crystalline Intelligence components
+ - Matrix-Corp/voxel-v1 *(coming soon)* — Matrix Voxel 3D generation models

  ---

+ *Building the future of intelligence — one paradigm at a time.* 🚀