Corpus-Callosum
by VaultAI
Deployment Status: UNRELEASED
[ PRE-ALPHA ] SOVEREIGN-CODE & CORPUS-CALLOSUM | ARCHITECTING...
✅ Routing, Millisecond-Fast.
The "Central Nervous System" of the VaultAI ecosystem. Corpus-Callosum is not a chatbot; it is a high-speed intent classifier and traffic controller.
Engineered for millisecond latency, Corpus-Callosum is a 1.5B-parameter SLERP merge. It bridges the generalist comprehension of Qwen 2.5 Instruct with the syntax sensitivity of Qwen 2.5 Coder. Its sole job is to analyze your prompt and decide which expert in your fleet is best equipped to handle it.
🧠 Architecture & Identity: The Traffic Cop
Corpus-Callosum is designed to run silently in the background in system RAM. Using a 50/50 Spherical Linear Interpolation (SLERP) merge, VaultAI has created a tiny but hyper-intelligent router that understands the difference between a request for lore and a request for code.
Key Routing Commands:
- [1] Creative/Abstract: Routes to Ouroboros-level creative reasoning.
- [2] Logic/Code: Routes to Sovereign-level execution engines.
- [3] Hybrid Relay: Triggers a multi-stage collaborative workflow between experts.
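In practice, the router's classification has to be consumed by a thin dispatch layer. A minimal sketch is below; the expert names, the endpoint mapping, and the fallback behavior are all illustrative assumptions, since the source only specifies the three routing classes:

```python
# Hypothetical dispatch layer. The expert identifiers below are placeholders,
# not part of any published VaultAI API.
EXPERTS = {
    "1": "ouroboros-creative",   # creative / abstract reasoning
    "2": "sovereign-code",       # logic / code execution
    "3": "hybrid-relay",         # multi-stage collaborative workflow
}

def route(router_token: str) -> str:
    """Map the router's single-token output to an expert identifier."""
    token = router_token.strip()
    if token not in EXPERTS:
        # Fail safe: an unexpected output falls back to the generalist path.
        return EXPERTS["1"]
    return EXPERTS[token]
```

Because the router emits exactly one token, dispatch cost is a dictionary lookup; the handoff itself adds effectively nothing to end-to-end latency.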
⚡ Performance & Efficiency
Corpus-Callosum is optimized to live entirely in CPU RAM, leaving 100% of your GPU VRAM available for the primary experts.
| Metric | Value |
|---|---|
| Latency (prompt processing) | < 50 ms |
| GPU VRAM required | 0% (CPU/RAM only) |
| System footprint | ~1.1 GB (Q4_K_M) |
| Model size | 1.5B parameters, lightweight background process |
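The ~1.1 GB figure is consistent with back-of-envelope quantization arithmetic. The ~6 effective bits per weight used here is an assumption: Q4_K_M mixes 4-bit and higher-precision sub-blocks, and small models carry a proportionally large embedding table, pushing the average above the nominal 4 bits.

```python
def gguf_size_gb(params_billions: float, effective_bpw: float) -> float:
    """Back-of-envelope GGUF file size: parameters x bits per weight / 8."""
    return params_billions * effective_bpw / 8

# ~1.5B parameters at an assumed ~6 effective bits per weight
print(round(gguf_size_gb(1.5, 6.0), 2))  # ~1.12
```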
Standardized Accuracy Benchmarks
Benchmarks are currently queued to test classification accuracy.
| Benchmark | Focus Area | Accuracy | Status |
|---|---|---|---|
| Intent Classification | Logic vs Creative | TBD | ⏳ Pending Eval |
| MMLU (Micro) | Knowledge Retention | TBD | ⏳ Pending Eval |
Model Details
- Type: Classification Language Model (SLERP merge)
- Base Architecture: Qwen 2.5 (1.5B)
- Merge Method: SLERP (Spherical Linear Interpolation)
- Blend Ratio:
  - 50% — Qwen2.5-1.5B-Instruct
  - 50% — Qwen2.5-Coder-1.5B-Instruct
- Tokenizer: Qwen 2.5 (1.5B Base)
- License: Apache 2.0
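For reference, SLERP interpolates along the arc between two weight tensors rather than the straight line used by a plain average, which preserves the norm of the merged weights. A minimal sketch of the formula on flattened vectors (illustrative only; real merges like mergekit's operate tensor-by-tensor across the two checkpoints):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel tensors: plain linear interpolation is numerically safer.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

At the 50/50 blend used here, `t = 0.5`, giving both parents equal angular weight.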
Why Corpus-Callosum?
- Low-Latency Orchestration: It doesn't waste tokens being polite. It reads, classifies, and hands off.
- VRAM Preservation: Designed specifically for users with limited VRAM who need to manage multi-model expert fleets.
- Embedded Directive: The model's metadata is overridden to ensure it never breaks its one-token output rule.
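The one-token rule can also be enforced mechanically at inference time. For example, a llama.cpp GBNF grammar (an illustrative technique; the source describes metadata overrides, not grammar constraints) restricts sampling to exactly one of the three routing tokens:

```gbnf
root ::= "1" | "2" | "3"
```

Passed via llama.cpp's grammar-constrained sampling (the `--grammar` flag), this makes it impossible for the router to emit anything except a valid routing token, regardless of prompt content.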