# Matrix.Corp

Independent AI research organization building specialized models, frontier agentic systems, and new intelligence paradigms.

**HuggingFace:** [Matrix-Corp](https://huggingface.co/Matrix-Corp) · **Founded by:** [Zandy-Wandy](https://huggingface.co/Zandy-Wandy) · **GitHub:** [zapgaming](https://github.com/zapgaming)

---

## Status Legend

| Badge | Meaning |
|---|---|
| 🟢 Released | Weights available, ready to use |
| 🟡 Preview | Architecture published, training planned |
| 🔴 Planned | Design complete, not yet built |
| 🩵 Long-Term | Vision defined, major research ahead |
| 🟣 Closed | Proprietary weights |
| ⬛ Deprecated | Cancelled or superseded |

---

## Models

### Zenith – Reasoning + Emotional Intelligence

**Status:** 🟡 Preview · **Target:** Tenstorrent Blackhole p300a

Transformer models with a built-in EQ Engine – a dedicated emotional-intelligence layer that sits alongside the reasoning stack. Ring Attention (32K context), MoE (12 experts, top-2 routing), Ollama- and vLLM-compatible.

| Model | Params | Base | Link |
|---|---|---|---|
| Zenith-7B-V1 | 7B | Qwen2.5-Coder-7B | [→](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) |
| Zenith-28B-V1 | 28B | Qwen3.5-27B (Opus 4.6 distilled) | [→](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) |
| Zenith-32B-V1 | 32B | DeepSeek-R1-Distill-Qwen-32B | [→](https://huggingface.co/Matrix-Corp/Zenith-32b-V1-Tenstorrent-Blackhole-p300) |
| Zenith-70B-V1 | 70B | DeepSeek-R1-Distill-Llama-70B | [→](https://huggingface.co/Matrix-Corp/Zenith-70b-V1-Tenstorrent-Blackhole-p300) |

[View Zenith Collection →](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
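
Since the cards advertise vLLM compatibility, a minimal sketch of querying a locally served Zenith-7B through vLLM's OpenAI-compatible API might look like this (the port, sampling settings, and served-model name are illustrative assumptions, not published config):

```python
# Hypothetical sketch: serve Zenith-7B with vLLM, then query it via the
# OpenAI-compatible API. The repo id comes from the table above; the port
# and sampling parameters are illustrative.
#
#   vllm serve Matrix-Corp/Zenith-7b-V1 --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="Matrix-Corp/Zenith-7b-V1",
    messages=[{"role": "user", "content": "Walk me through a hard tradeoff, and how it feels."}],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```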

---

### Vortex Scientific – Deep Science Reasoning

**Status:** 🟡 Preview · **Target:** MacBook M2/M3 + Nvidia 4060

Built from scratch – no base model. Custom 50K-vocabulary science tokenizer. Hybrid SSM + Attention architecture with four domain-specific modules: Equation/LaTeX, Numerical, Citation, and Molecular/Periodic Table.

| Model | Params | Link |
|---|---|---|
| Vortex-7B-V1 | 7B | [→](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) |
| Vortex-13B-V1 | 13B | [→](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) |

[View Vortex Collection →](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
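
To make the hybrid layout concrete: the Vortex-7B card quotes a roughly 60% SSM mix. Here is a minimal sketch of one way to interleave SSM and attention blocks at a target ratio – the round-robin budgeting rule is an assumption for illustration, not Vortex's documented layout:

```python
# Illustrative sketch only: build a layer schedule for a hybrid stack in
# which roughly `ssm_ratio` of the blocks are SSM and the rest attention.
def hybrid_schedule(num_layers: int, ssm_ratio: float) -> list[str]:
    schedule, ssm_budget = [], 0.0
    for _ in range(num_layers):
        ssm_budget += ssm_ratio
        if ssm_budget >= 1.0:          # emit an SSM block when the budget fills
            schedule.append("ssm")
            ssm_budget -= 1.0
        else:
            schedule.append("attention")
    return schedule

# e.g. a 32-layer stack at the 60% SSM mix quoted for Vortex-7B
print(hybrid_schedule(32, 0.60))
```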

---

### Touch Grass – Music AI

**Status:** 🟡 Preview · **Target:** Any hardware

A LoRA fine-tune of Qwen3.5 built for musicians. Modules: Tab & Chord, Music Theory Engine, Ear Training, EQ Adapter (four emotional modes), and Songwriting.

| Model | Params | Base | Link |
|---|---|---|---|
| TouchGrass-3B | 3B | Qwen3.5-3B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-3b) |
| TouchGrass-7B | 7B | Qwen3.5-7B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-7b) |

[View Touch Grass Collection →](https://huggingface.co/collections/Matrix-Corp/touch-grass)
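
Since Touch Grass is a LoRA fine-tune, loading it should look like attaching any PEFT adapter to its Qwen base. A hedged sketch – the Hub path for the base model and the assumption that the repo ships a PEFT-format adapter are unverified:

```python
# Hypothetical loading sketch. The adapter repo id comes from the table
# above; the "Qwen/..." base path and the PEFT adapter layout are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "Matrix-Corp/TouchGrass-7b")
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-7B-Instruct")

prompt = "Give me a chord progression for a wistful folk song, with tabs."
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=200)
print(tok.decode(out[0], skip_special_tokens=True))
```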

---

### Matrix Lattice – Frontier Agentic MoE

**Status:** 🟢 Released · 🟣 Closed Source · **Target:** 4–32× H100 / Tenstorrent p300a

**Shipped.** Our largest and most capable system: a frontier-scale mixture-of-experts with 17 custom intelligence modules, including EQ Engine V2, Multi-Agent Coordination Layer (MACL), Hierarchical Context Compression Engine (HCCE), Causal Reasoning Graph, Long-Horizon Task Planner, Confidence Calibration Head, Safety Reasoning Module (SRM), and more. 1M-token context across all tiers.

| Model | Total Params | Active Params | Experts | Context | Link |
|---|---|---|---|---|---|
| Lattice-120B | 120B | ~22B | 64, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-120B-V1) |
| Lattice-430B | 430B | ~38B | 128, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-430B-V1) |
| Lattice-671B | 671B | ~47B | 256, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-671B-V1) |

[View Lattice Collection →](https://huggingface.co/collections/Matrix-Corp/lattice-v1)
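
For readers unfamiliar with the "Experts" column, a minimal sketch of what top-4 routing over 64 experts means per token – standard MoE gating, assumed rather than taken from Lattice internals:

```python
# Minimal top-k expert routing sketch (64 experts, 4 active per token).
# Softmax-renormalized gating over the chosen experts is common MoE
# practice; Lattice's actual router is not published.
import numpy as np

def route_top4(router_logits: np.ndarray, k: int = 4):
    """router_logits: (num_experts,) scores for one token."""
    top = np.argsort(router_logits)[-k:]            # indices of the k best experts
    gates = np.exp(router_logits[top] - router_logits[top].max())
    gates /= gates.sum()                            # renormalize over chosen experts
    return top, gates                               # output = sum(g_i * expert_i(x))

logits = np.random.randn(64)                        # one token, 64 experts
experts, gates = route_top4(logits)
print(experts, gates.round(3))
```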

---

### Matrix ECHO – Living Error Memory

**Status:** 🔴 Build In Progress · 🟢 Open Source · **Language:** Rust

**The model that remembers how it was wrong.**

ECHO is a 27B coding-focused LLM built on `Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled`, running fully in Rust via HuggingFace `candle`. Every correction it receives crystallizes into a **Scar** – a typed, weighted memory object stored in a live petgraph lattice.

Before every response, ECHO scans its Scar lattice for similar past mistakes. The more it is corrected, the harder it is to fool: mistakes are not erased, they become assets.

**Core loop:**

```
prompt → pre-scan Scar lattice → inject caution context → generate → correction → new Scar forms
```

**Scar types:** Factual · Logical · Contextual · Hallucination · Overconfidence

**Domain Weakness Map** – ECHO tracks which topics it is systematically weak in and automatically suppresses confidence in those high-risk domains.
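
A conceptual sketch of how Scars, the pre-scan, and the Domain Weakness Map fit together. The shipped engine is Rust + petgraph, so every name and rule below is a hypothetical Python illustration of the description above, not ECHO's API:

```python
# Hypothetical illustration of ECHO's Scar memory, pre-scan, and domain
# weakness tracking. Field names and rules are invented for this sketch.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Scar:
    kind: str       # factual | logical | contextual | hallucination | overconfidence
    domain: str     # topic the correction concerned
    weight: float   # how strongly the scar should bias future answers

def pre_scan(scars: list[Scar], prompt: str) -> list[Scar]:
    # Toy relevance rule: a scar matters if its domain appears in the prompt.
    return [s for s in scars if s.domain in prompt.lower()]

def confidence_cap(scars: list[Scar], domain: str, base: float = 1.0) -> float:
    # Domain Weakness Map reduced to a counter: more scars in a domain
    # means a lower confidence ceiling for answers in that domain.
    weakness = Counter(s.domain for s in scars)
    return base / (1 + weakness[domain])

scars = [Scar("factual", "rust lifetimes", 0.8), Scar("logical", "rust lifetimes", 0.5)]
print(pre_scan(scars, "Explain Rust lifetimes"))
print(confidence_cap(scars, "rust lifetimes"))   # suppressed: 1 / (1 + 2)
```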

**OpenAI-compatible API** – drop-in via `POST /v1/chat/completions`; corrections via `POST /v1/echo/correct`.
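
A hedged client sketch against those two endpoints. The chat payload follows the standard OpenAI schema the README claims compatibility with; the `/v1/echo/correct` body shown here is invented, since its schema is not published:

```python
# Hypothetical client sketch. Host, port, model name, and the correction
# payload fields are all assumptions for illustration.
import requests

BASE = "http://localhost:8080"  # assumed host/port

chat = requests.post(f"{BASE}/v1/chat/completions", json={
    "model": "echo-27b-v1",
    "messages": [{"role": "user", "content": "Does Rust have a garbage collector?"}],
}).json()
print(chat["choices"][0]["message"]["content"])

# Feed a correction back so it can crystallize into a Scar (fields hypothetical).
requests.post(f"{BASE}/v1/echo/correct", json={
    "conversation_id": chat.get("id"),
    "correction": "No GC: ownership and Drop handle deallocation.",
})
```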

| Model | Params | Base | Language |
|---|---|---|---|
| ECHO-27B-V1 | 27B | Qwen3.5-27B (Opus 4.6 distilled) | Rust + candle |

[View ECHO Collection →](https://huggingface.co/collections/Matrix-Corp/echo-v1)

---

### Matrix Voxel – 3D Generation

**Status:** 🔴 Planned · **Target:** A100 40GB

Flow-matching DiT backbone (~2.3B) with task-specific decoder heads. Generates 3D meshes, environments, printable models, and NeRF / Gaussian Splatting outputs.

| Model | Task | Outputs | License |
|---|---|---|---|
| Voxel Atlas | World/environment generation | .vox, .obj, .usd | 🟢 Open |
| Voxel Forge | 3D mesh & assets | .obj, .glb, .fbx, .usdz | 🟢 Open |
| Voxel Cast | 3D printable models | .stl, .step, .3mf | 🟢 Open |
| Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS) | 🟢 Open |
| Voxel Prime | Unified all-in-one | All formats | 🟣 Closed |

---

### Matrix Vexa – Crystalline Intelligence Substrate

**Status:** 🔴 Paused · 🟢 Open Source

Vexa is not a model. It is a new intelligence paradigm: a living lattice of **Glyphs** (structured meaning objects) that grows through **Crystallization** instead of training – about 10 minutes on any 8-core CPU, no GPU required. Knowledge never goes stale: three background threads (a web crystallizer, an interaction crystallizer, and a decay monitor) keep the lattice current.

The full paradigm definition and build prompt are complete. The build is paused and will resume.

[View Vexa Collection →](https://huggingface.co/collections/Matrix-Corp/vexa-v1)
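
For a sense of what a Glyph carries, here is a conceptual sketch following the paradigm notes (identity, typed relations, confidence, source references, and a decay function). All field names and the exponential decay rule are hypothetical, not Vexa's spec:

```python
# Hypothetical Glyph sketch. Fields mirror the description in the Vexa
# materials; the decay model and defaults are invented for illustration.
import math, time
from dataclasses import dataclass, field

@dataclass
class Glyph:
    identity: str
    relations: dict[str, list[str]] = field(default_factory=dict)  # typed edges, e.g. {"is_a": [...]}
    confidence: float = 1.0
    sources: list[str] = field(default_factory=list)
    born: float = field(default_factory=time.time)

    def decayed_confidence(self, half_life_s: float = 86_400.0) -> float:
        # Exponential decay stands in for the per-Glyph decay function.
        age = time.time() - self.born
        return self.confidence * math.exp(-math.log(2) * age / half_life_s)

g = Glyph("water", {"is_a": ["liquid"], "boils_at": ["100C"]}, 0.95, ["https://example.org"])
print(g.decayed_confidence())
```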

---

### ⬛ ~~Kairiq – Critical Moment Intelligence Module~~

**Status:** ⬛ Deprecated

A Lume-native intelligence-amplifier module designed to wrap Matrix models. Deprecated: the custom Lume language runtime exceeded practical build complexity. Its core ideas (pre-scan, confidence suppression, domain routing) were absorbed into ECHO.

---

## Paradigms

| Name | Type | Status |
|---|---|---|
| Crystalline Intelligence (Vexa) | Non-neural knowledge substrate | 🔴 Paused |
| Living Error Memory (ECHO) | Scar-based mistake crystallization | 🔴 Build In Progress |
| Ferric Attention | Ownership-typed attention mechanism | 🩵 Research concept |

---

## Reserved Names

These names are allocated to specific projects and are not available for other uses.

| Name | Allocated To |
|---|---|
| Vexa | Crystalline Intelligence Substrate |
| ECHO | Living Error Memory LLM |
| Axiom | Future extreme reasoning model (planned) |
| Lume | Declarative-relational language for Vexa |

---

## Licensing

| Model Family | License |
|---|---|
| Zenith | Apache 2.0 |
| Vortex | Apache 2.0 |
| Touch Grass | Apache 2.0 |
| Matrix Lattice | Proprietary |
| Matrix ECHO | Apache 2.0 |
| Matrix Voxel (open tiers) | Apache 2.0 |
| Matrix Voxel Prime | Proprietary |
| Vexa | Apache 2.0 |

---

*Matrix.Corp – building intelligence that knows its own limits.*