---
title: Matrix.Corp
emoji: 🐦‍🔥
colorFrom: indigo
colorTo: pink
sdk: static
pinned: true
---
# Matrix.Corp
Independent AI research organization building specialized models, frontier agentic systems, and new intelligence paradigms.
**HuggingFace:** [Matrix-Corp](https://huggingface.co/Matrix-Corp) · **Founded by:** [Zandy-Wandy](https://huggingface.co/Zandy-Wandy) · **GitHub:** [zapgaming](https://github.com/zapgaming)
---
## Status Legend
| Badge | Meaning |
|---|---|
| 🟢 Released | Weights available, ready to use |
| 🟡 Preview | Architecture published, training planned |
| 🔴 Planned | Design complete, not yet built |
| 🩵 Long-Term | Vision defined, major research ahead |
| 🟣 Closed | Proprietary weights |
| ⬛ Deprecated | Cancelled or superseded |
---
## Models
### 🌌 Zenith — Reasoning + Emotional Intelligence
**Status:** 🟡 Preview · **Target:** Tenstorrent Blackhole p300a
Transformer models with a built-in EQ Engine — a dedicated emotional intelligence layer that sits alongside the reasoning stack. Ring Attention (32K context), MoE (12 experts, top-2 routing), Ollama- and vLLM-compatible.
| Model | Params | Base | Link |
|---|---|---|---|
| Zenith-7B-V1 | 7B | Qwen2.5-Coder-7B | [→](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) |
| Zenith-28B-V1 | 28B | Qwen3.5-27B (Opus 4.6 distilled) | [→](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) |
| Zenith-32B-V1 | 32B | DeepSeek-R1-Distill-Qwen-32B | [→](https://huggingface.co/Matrix-Corp/Zenith-32b-V1-Tenstorrent-Blackhole-p300) |
| Zenith-70B-V1 | 70B | DeepSeek-R1-Distill-Llama-70B | [→](https://huggingface.co/Matrix-Corp/Zenith-70b-V1-Tenstorrent-Blackhole-p300) |
[View Zenith Collection →](https://huggingface.co/collections/Matrix-Corp/zenith-v1)
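The top-2-of-12 expert routing used in Zenith's MoE layers can be illustrated with a minimal router: a gate scores every expert per token, and only the two highest-scoring experts run, with their outputs mixed by renormalized softmax weights. This is an illustrative NumPy sketch, not Zenith's implementation; all shapes and names are made up for the example.

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Route one token through the top-2 of n experts.

    x: (d,) token vector; gate_w: (d, n) gating matrix;
    experts: list of n (d, d) expert weight matrices.
    """
    logits = x @ gate_w                    # (n,) one score per expert
    top2 = np.argsort(logits)[-2:]         # indices of the two best experts
    weights = np.exp(logits[top2])
    weights /= weights.sum()               # softmax renormalized over the top-2
    # Only the two selected experts are evaluated; the other 10 stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top2))

rng = np.random.default_rng(0)
d, n = 8, 12
x = rng.normal(size=d)
gate = rng.normal(size=(d, n))
experts = [rng.normal(size=(d, d)) for _ in range(n)]
y = top2_moe(x, gate, experts)
print(y.shape)  # (8,)
```

The point of top-k routing is that per-token compute scales with k, not with the total expert count, which is why large MoE models can keep inference cost far below their parameter count.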
---
### 🔬 Vortex Scientific — Deep Science Reasoning
**Status:** 🟡 Preview · **Target:** MacBook M2/M3 + Nvidia 4060
Built from scratch — no base model. Custom 50K-vocabulary science tokenizer. Hybrid SSM + attention architecture with four domain-specific modules: Equation/LaTeX, Numerical, Citation, and Molecular/Periodic Table.
| Model | Params | Link |
|---|---|---|
| Vortex-7B-V1 | 7B | [→](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) |
| Vortex-13B-V1 | 13B | [→](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) |
[View Vortex Collection →](https://huggingface.co/collections/Matrix-Corp/vortex-v1)
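The SSM half of a hybrid SSM + attention stack carries a recurrent state instead of attending over all past tokens: h_t = a * h_{t-1} + b * x_t, y_t = c . h_t, giving linear-time sequence processing. Below is a minimal diagonal state-space scan to show the recurrence; it is illustrative only and bears no relation to Vortex's actual kernels or parameterization.

```python
import numpy as np

def ssm_scan(xs, a, b, c):
    """Minimal diagonal SSM scan: h_t = a*h_{t-1} + b*x_t, y_t = c.h_t."""
    h = np.zeros_like(a)
    ys = []
    for x in xs:
        h = a * h + b * x          # elementwise update (diagonal state matrix)
        ys.append(float(c @ h))    # readout is a dot product over state channels
    return ys

a = np.array([0.9, 0.5])   # per-channel decay: how long each channel remembers
b = np.array([1.0, 1.0])   # input projection
c = np.array([0.5, 0.5])   # output projection
print(ssm_scan([1.0, 0.0, 0.0], a, b, c))  # impulse response decays over time
```

An impulse at t=0 fades at each channel's own decay rate, which is the mechanism SSMs use to keep a compressed summary of arbitrarily long context in fixed-size state.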
---
### 🌿 Touch Grass — Music AI
**Status:** 🟡 Preview · **Target:** Any hardware
A LoRA fine-tune of Qwen3.5 built for musicians: Tab & Chord Module, Music Theory Engine, Ear Training, EQ Adapter (4 emotional modes), and a Songwriting Module.
| Model | Params | Base | Link |
|---|---|---|---|
| TouchGrass-3B | 3B | Qwen3.5-3B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-3b) |
| TouchGrass-7B | 7B | Qwen3.5-7B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-7b) |
[View Touch Grass Collection →](https://huggingface.co/collections/Matrix-Corp/touch-grass)
---
### 🌐 Matrix Lattice — Frontier Agentic MoE
**Status:** 🟢 Released · 🟣 Closed Source · **Target:** 4–32× H100 / Tenstorrent p300a
**Shipped.** Our largest and most capable system: a frontier-scale mixture-of-experts with 17 custom intelligence modules, including EQ Engine V2, the Multi-Agent Coordination Layer (MACL), the Hierarchical Context Compression Engine (HCCE), a Causal Reasoning Graph, a Long-Horizon Task Planner, a Confidence Calibration Head, the Safety Reasoning Module (SRM), and more. 1M-token context across all tiers.
| Model | Total Params | Active Params | Experts | Context | Link |
|---|---|---|---|---|---|
| Lattice-120B | 120B | ~22B | 64, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-120B-V1) |
| Lattice-430B | 430B | ~38B | 128, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-430B-V1) |
| Lattice-671B | 671B | ~47B | 256, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-671B-V1) |
[View Lattice Collection →](https://huggingface.co/collections/Matrix-Corp/lattice-v1)
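The Active Params column follows from top-k routing arithmetic: each token touches the shared (non-expert) parameters plus k of the n expert FFNs. The back-of-envelope sketch below uses a hypothetical shared-parameter fraction chosen to land near the listed ~22B for Lattice-120B; the actual split is not published here.

```python
def active_params(total, n_experts, k, shared_frac):
    """Rough active-parameter estimate for a top-k MoE.

    shared_frac is the fraction of total params outside the expert FFNs
    (attention, embeddings, routers) -- a hypothetical value, not a spec.
    """
    shared = total * shared_frac
    expert_pool = total - shared
    # Each token activates the shared params plus k of n equal-size experts.
    return shared + expert_pool * k / n_experts

# Hypothetical ~13% shared fraction puts Lattice-120B near its listed ~22B:
est = active_params(120e9, n_experts=64, k=4, shared_frac=0.13)
print(f"{est / 1e9:.1f}B")  # → 22.1B
```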
---
### 🩸 Matrix ECHO — Living Error Memory
**Status:** 🔴 Build In Progress · 🟢 Open Source · **Language:** Rust
**The model that remembers how it was wrong.**
ECHO is a 27B coding-focused LLM built on `Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled`, running fully in Rust via HuggingFace `candle`. Every correction it receives crystallizes into a **Scar** — a typed, weighted memory object stored in a live petgraph lattice.
Before every response, ECHO scans its Scar lattice for similar past mistakes. The more it is corrected, the harder it becomes to fool. Mistakes are not erased — they become assets.
**Core loop:**
```
prompt → pre-scan Scar lattice → inject caution context → generate → correction → new Scar forms
```
**Scar types:** Factual · Logical · Contextual · Hallucination · Overconfidence
**Domain Weakness Map** — ECHO tracks which topics it is systematically weak in and automatically suppresses confidence in high-risk domains.
**OpenAI-compatible API** — drop-in via `POST /v1/chat/completions`; corrections via `POST /v1/echo/correct`.
| Model | Params | Base | Language |
|---|---|---|---|
| ECHO-27B-V1 | 27B | Qwen3.5-27B (Opus 4.6 distilled) | Rust + candle |
[View ECHO Collection →](https://huggingface.co/collections/Matrix-Corp/echo-v1)
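ECHO's core loop can be sketched in a few lines: corrections crystallize into typed, weighted Scars, repeated mistakes on the same topic gain weight, and a pre-scan surfaces caution notes for topics that were corrected before. This is an illustrative Python sketch with hypothetical names; the real system is Rust, with the lattice stored in petgraph rather than a flat list.

```python
from dataclasses import dataclass, field

@dataclass
class Scar:
    topic: str
    kind: str       # Factual / Logical / Contextual / Hallucination / Overconfidence
    weight: float   # grows each time the same mistake is corrected again

@dataclass
class ScarLattice:
    scars: list = field(default_factory=list)

    def correct(self, topic, kind):
        """Record a correction; a repeat on the same topic deepens the Scar."""
        for s in self.scars:
            if s.topic == topic and s.kind == kind:
                s.weight += 1.0
                return s
        s = Scar(topic, kind, 1.0)
        self.scars.append(s)
        return s

    def pre_scan(self, prompt):
        """Before generating, collect caution notes for topics in the prompt."""
        return [f"caution: past {s.kind} error on '{s.topic}' (w={s.weight})"
                for s in self.scars if s.topic in prompt.lower()]

lattice = ScarLattice()
lattice.correct("rust lifetimes", "Logical")
lattice.correct("rust lifetimes", "Logical")   # same mistake twice: weight 2.0
print(lattice.pre_scan("explain rust lifetimes to me"))
# → ["caution: past Logical error on 'rust lifetimes' (w=2.0)"]
```

In the full loop these caution notes would be injected into the generation context, which is how "the more it's corrected, the harder it is to fool" becomes mechanical rather than aspirational.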
---
### 🎨 Matrix Voxel — 3D Generation
**Status:** 🔴 Planned · **Target:** A100 40GB
Flow-matching DiT backbone (~2.3B params) with task-specific decoder heads. Generates 3D meshes, environments, printable models, and NeRF/Gaussian Splatting outputs.
| Model | Task | Outputs | License |
|---|---|---|---|
| Voxel Atlas | World/environment gen | .vox, .obj, .usd | 🟢 Open |
| Voxel Forge | 3D mesh & assets | .obj, .glb, .fbx, .usdz | 🟢 Open |
| Voxel Cast | 3D printable | .stl, .step, .3mf | 🟢 Open |
| Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS) | 🟢 Open |
| Voxel Prime | Unified all-in-one | All formats | 🟣 Closed |
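Flow matching, the training objective behind the Voxel backbone, regresses a velocity field along a path from noise to data: with the common linear path, x_t = (1 - t) * x0 + t * x1 and the regression target is the constant velocity v = x1 - x0. A minimal sketch of that target construction (illustrative only, since Voxel is still in the planned stage):

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Linear interpolation path and its constant velocity target."""
    x_t = (1.0 - t) * x0 + t * x1   # point on the noise-to-data path at time t
    v_target = x1 - x0              # velocity the network is trained to predict
    return x_t, v_target

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)   # noise sample
x1 = rng.normal(size=4)   # data sample (e.g. a latent code for a 3D asset)
x_t, v = flow_matching_pair(x0, x1, t=0.5)
print(np.allclose(x_t, 0.5 * (x0 + x1)))  # True: midpoint of the path
```

At sampling time the learned velocity field is integrated from noise toward data with an ODE solver, which is what lets a single backbone feed the different decoder heads.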
---
### 🔷 Matrix Vexa — Crystalline Intelligence Substrate
**Status:** 🔴 Paused · 🟢 Open Source
Vexa is not a model. It is a new intelligence paradigm — a living lattice of **Glyphs** (structured meaning objects) that grows through **Crystallization** instead of training. It builds in about 10 minutes on any CPU; no GPU required. Knowledge never goes stale — three background threads continuously update the lattice from the web, from interactions, and through decay.
The full paradigm definition and build prompt are complete. The build is paused and will resume.
[View Vexa Collection →](https://huggingface.co/collections/Matrix-Corp/vexa-v1)
---
### ⬛ ~~Kairiq — Critical Moment Intelligence Module~~
**Status:** ⬛ Deprecated
A Lume-native intelligence-amplifier module designed to wrap Matrix models. Deprecated because the custom Lume language runtime exceeded practical build complexity; its core ideas (pre-scan, confidence suppression, domain routing) were absorbed into ECHO.
---
## Paradigms
| Name | Type | Status |
|---|---|---|
| Crystalline Intelligence (Vexa) | Non-neural knowledge substrate | 🔴 Paused |
| Living Error Memory (ECHO) | Scar-based mistake crystallization | 🔴 Build In Progress |
| Ferric Attention | Ownership-typed attention mechanism | 🩵 Research concept |
---
## Reserved Names
These names are allocated to specific projects. Not available for other uses.
| Name | Allocated To |
|---|---|
| Vexa | Crystalline Intelligence Substrate |
| ECHO | Living Error Memory LLM |
| Axiom | Future extreme reasoning model (planned) |
| Lume | Declarative-relational language for Vexa |
---
## Licensing
| Model Family | License |
|---|---|
| Zenith | Apache 2.0 |
| Vortex | Apache 2.0 |
| Touch Grass | Apache 2.0 |
| Matrix Lattice | Proprietary |
| Matrix ECHO | Apache 2.0 |
| Matrix Voxel (open tiers) | Apache 2.0 |
| Matrix Voxel Prime | Proprietary |
| Vexa | Apache 2.0 |
---
*Matrix.Corp — building intelligence that knows its own limits.*