---
title: Matrix.Corp
emoji: 🐦‍🔥
colorFrom: indigo
colorTo: pink
sdk: static
pinned: true
---

# Matrix.Corp

Independent AI research organization building specialized models, frontier agentic systems, and new intelligence paradigms.

**HuggingFace:** [Matrix-Corp](https://huggingface.co/Matrix-Corp) · **Founded by:** [Zandy-Wandy](https://huggingface.co/Zandy-Wandy) · **GitHub:** [zapgaming](https://github.com/zapgaming)

---

## Status Legend

| Badge | Meaning |
|---|---|
| 🟢 Released | Weights available, ready to use |
| 🟡 Preview | Architecture published, training planned |
| 🔴 Planned | Design complete, not yet built |
| 🩵 Long-Term | Vision defined, major research ahead |
| 🟣 Closed | Proprietary weights |
| ⬛ Deprecated | Cancelled or superseded |

---

## Models

### 🌌 Zenith — Reasoning + Emotional Intelligence

**Status:** 🟡 Preview · **Target:** Tenstorrent Blackhole p300a

Transformer models with a built-in EQ Engine — a dedicated emotional intelligence layer that sits alongside the reasoning stack. Ring Attention (32K context), MoE (12 experts, top-2 routing), Ollama + vLLM compatible.

| Model | Params | Base | Link |
|---|---|---|---|
| Zenith-7B-V1 | 7B | Qwen2.5-Coder-7B | [→](https://huggingface.co/Matrix-Corp/Zenith-7b-V1) |
| Zenith-28B-V1 | 28B | Qwen3.5-27B (Opus 4.6 distilled) | [→](https://huggingface.co/Matrix-Corp/Zenith-28b-p300-V1) |
| Zenith-32B-V1 | 32B | DeepSeek-R1-Distill-Qwen-32B | [→](https://huggingface.co/Matrix-Corp/Zenith-32b-V1-Tenstorrent-Blackhole-p300) |
| Zenith-70B-V1 | 70B | DeepSeek-R1-Distill-Llama-70B | [→](https://huggingface.co/Matrix-Corp/Zenith-70b-V1-Tenstorrent-Blackhole-p300) |

[View Zenith Collection →](https://huggingface.co/collections/Matrix-Corp/zenith-v1)

---

### 🔬 Vortex Scientific — Deep Science Reasoning

**Status:** 🟡 Preview · **Target:** MacBook M2/M3 + Nvidia 4060

Built from scratch — no base model. Custom 50K science tokenizer.
Hybrid SSM+Attention architecture with four domain-specific modules: Equation/LaTeX, Numerical, Citation, and Molecular/Periodic Table.

| Model | Params | Link |
|---|---|---|
| Vortex-7B-V1 | 7B | [→](https://huggingface.co/Matrix-Corp/Vortex-7b-V1) |
| Vortex-13B-V1 | 13B | [→](https://huggingface.co/Matrix-Corp/Vortex-13b-V1) |

[View Vortex Collection →](https://huggingface.co/collections/Matrix-Corp/vortex-v1)

---

### 🌿 Touch Grass — Music AI

**Status:** 🟡 Preview · **Target:** Any hardware

LoRA fine-tune of Qwen3.5 built for musicians. Tab & Chord Module, Music Theory Engine, Ear Training, EQ Adapter (4 emotional modes), Songwriting Module.

| Model | Params | Base | Link |
|---|---|---|---|
| TouchGrass-3B | 3B | Qwen3.5-3B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-3b) |
| TouchGrass-7B | 7B | Qwen3.5-7B-Instruct | [→](https://huggingface.co/Matrix-Corp/TouchGrass-7b) |

[View Touch Grass Collection →](https://huggingface.co/collections/Matrix-Corp/touch-grass)

---

### 🌐 Matrix Lattice — Frontier Agentic MoE

**Status:** 🟢 Released · 🟣 Closed Source · **Target:** 4–32× H100 / Tenstorrent p300a

**Shipped.** Our largest and most capable system. Frontier-scale mixture-of-experts with 17 custom intelligence modules, including: EQ Engine V2, Multi-Agent Coordination Layer (MACL), Hierarchical Context Compression Engine (HCCE), Causal Reasoning Graph, Long-Horizon Task Planner, Confidence Calibration Head, Safety Reasoning Module (SRM), and more. 1M-token context across all tiers.
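For readers unfamiliar with top-k expert routing, here is a minimal, hedged sketch in plain Python of how a softmax-gated router activates only a few experts per token (Lattice's actual router is proprietary; the function name, the 8-expert example, and the logit values below are all illustrative, not from Matrix.Corp):

```python
import math

def top_k_route(logits, k=4):
    """Hypothetical softmax-gate router: pick the k highest-scoring experts
    for one token and renormalize their gate weights to sum to 1."""
    # Numerically stabilized softmax over all expert logits.
    m = max(logits)
    exp = [math.exp(x - m) for x in logits]
    total = sum(exp)
    probs = [e / total for e in exp]
    # Keep only the k most probable experts; the rest stay inactive.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# One token's router logits over 8 experts (64/128/256 in the real tiers).
routes = top_k_route([0.1, 2.0, -1.0, 0.5, 1.7, 0.0, 3.1, 0.2], k=4)
print(routes)  # four (expert_index, gate_weight) pairs; weights sum to 1
```

This is why active parameters stay far below total parameters in the table below: only the k selected experts run per token, and their outputs are mixed by the renormalized gate weights.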
| Model | Total Params | Active Params | Experts | Context | Link |
|---|---|---|---|---|---|
| Lattice-120B | 120B | ~22B | 64, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-120B-V1) |
| Lattice-430B | 430B | ~38B | 128, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-430B-V1) |
| Lattice-671B | 671B | ~47B | 256, top-4 | 1M | [→](https://huggingface.co/Matrix-Corp/Lattice-671B-V1) |

[View Lattice Collection →](https://huggingface.co/collections/Matrix-Corp/lattice-v1)

---

### 🩸 Matrix ECHO — Living Error Memory

**Status:** 🔴 Build In Progress · 🟢 Open Source · **Language:** Rust

**The model that remembers how it was wrong.** ECHO is a 27B coding-focused LLM built on `Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled`, running fully in Rust via HuggingFace `candle`. Every correction it receives crystallizes into a **Scar** — a typed, weighted memory object stored in a live petgraph lattice. Before every response, ECHO scans its Scar lattice for similar past mistakes. The more it is corrected, the harder it is to fool. Mistakes are not erased — they become assets.

**Core loop:**

```
prompt → pre-scan Scar lattice → inject caution context → generate → correction → new Scar forms
```

**Scar types:** Factual · Logical · Contextual · Hallucination · Overconfidence

**Domain Weakness Map** — ECHO tracks which topics it is systematically weak in and automatically suppresses confidence in high-risk domains.

**OpenAI-compatible API** — drop-in via `POST /v1/chat/completions`; corrections via `POST /v1/echo/correct`.

| Model | Params | Base | Language |
|---|---|---|---|
| ECHO-27B-V1 | 27B | Qwen3.5-27B (Opus 4.6 distilled) | Rust + candle |

[View ECHO Collection →](https://huggingface.co/collections/Matrix-Corp/echo-v1)

---

### 🎨 Matrix Voxel — 3D Generation

**Status:** 🔴 Planned · **Target:** A100 40GB

Flow-matching DiT backbone (~2.3B params) with task-specific decoder heads.
Generates 3D meshes, environments, printable models, and NeRF/Gaussian Splatting outputs.

| Model | Task | Outputs | License |
|---|---|---|---|
| Voxel Atlas | World/environment generation | .vox, .obj, .usd | 🟢 Open |
| Voxel Forge | 3D meshes & assets | .obj, .glb, .fbx, .usdz | 🟢 Open |
| Voxel Cast | 3D printable models | .stl, .step, .3mf | 🟢 Open |
| Voxel Lens | NeRF / Gaussian Splatting | .ply (3DGS) | 🟢 Open |
| Voxel Prime | Unified all-in-one | All formats | 🟣 Closed |

---

### 🔷 Matrix Vexa — Crystalline Intelligence Substrate

**Status:** 🔴 Paused · 🟢 Open Source

Vexa is not a model. It is a new intelligence paradigm — a living lattice of **Glyphs** (structured meaning objects) that grows through **Crystallization** instead of training. It builds in 10 minutes on any CPU; no GPU required. Knowledge never goes stale — three background threads continuously update the lattice from the web, from interactions, and through decay.

Full paradigm definition and build prompt are complete. The build is paused and will resume.

[View Vexa Collection →](https://huggingface.co/collections/Matrix-Corp/vexa-v1)

---

### ⬛ ~~Kairiq — Critical Moment Intelligence Module~~

**Status:** ⬛ Deprecated

A Lume-native intelligence amplifier module designed to wrap Matrix models. Deprecated — the custom Lume language runtime exceeded practical build complexity. Its core ideas (pre-scan, confidence suppression, domain routing) were absorbed into ECHO.

---

## Paradigms

| Name | Type | Status |
|---|---|---|
| Crystalline Intelligence (Vexa) | Non-neural knowledge substrate | 🔴 Paused |
| Living Error Memory (ECHO) | Scar-based mistake crystallization | 🔴 Build In Progress |
| Ferric Attention | Ownership-typed attention mechanism | 🩵 Research concept |

---

## Reserved Names

These names are allocated to specific projects and are not available for other uses.
| Name | Allocated To |
|---|---|
| Vexa | Crystalline Intelligence Substrate |
| ECHO | Living Error Memory LLM |
| Axiom | Future extreme-reasoning model (planned) |
| Lume | Declarative-relational language for Vexa |

---

## Licensing

| Model Family | License |
|---|---|
| Zenith | Apache 2.0 |
| Vortex | Apache 2.0 |
| Touch Grass | Apache 2.0 |
| Matrix Lattice | Proprietary |
| Matrix ECHO | Apache 2.0 |
| Matrix Voxel (open tiers) | Apache 2.0 |
| Matrix Voxel Prime | Proprietary |
| Vexa | Apache 2.0 |

---

*Matrix.Corp — building intelligence that knows its own limits.*