---
title: README
emoji: ⚡
colorFrom: blue
colorTo: indigo
sdk: static
pinned: false
---
<div align="center">
# 🌌 **Mythic Artificial Intelligence**
### *by MythicGames*
**Building the next generation of merged language models**
🌐 [Visit our platform](https://mythicgames.ru) · 💬 [Chat with MAI models](https://mythicgames.ru/app) · 📂 [All Models](https://mythicgames.ru/models)
</div>
---
## 🧬 Model Families
MAI models follow a unified naming convention:
```
MAI M{version} {Specialization} {Variant}
MAI {version} {Variant}
MAI C{version} {Variant}
MAIGEN {version} {Specification}
MAIMIND {version} {Specification}
MAITTS {version} {Specification}
MAIEDITOR {version}.{Date of release} {Update feature name}
```
| Component | Meaning | Examples |
|---|---|---|
| **M{version}** | Generation / major version | M1, M2, M3, M4 |
| **Specialization** | Primary task focus | Coder, Chat, Reason, Vision |
| **Variant** | Speed / depth profile | Fast, Thinking |
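
For illustration only, the core `MAI M{version} {Specialization} {Variant}` pattern can be parsed in a few lines. This is our own sketch — the regex and field names are not an official MAI tool:

```python
import re

# Toy parser for the "MAI M{version} {Specialization} {Variant}" pattern.
# Illustrative only; the field names are ours, not an official MAI API.
MAI_NAME = re.compile(r"^MAI M(?P<version>\d+) (?P<spec>\w+) (?P<variant>Fast|Thinking)$")

def parse_mai_name(name: str) -> dict:
    m = MAI_NAME.match(name)
    if m is None:
        raise ValueError(f"not a recognized MAI model name: {name!r}")
    return {
        "version": int(m.group("version")),
        "specialization": m.group("spec"),
        "variant": m.group("variant"),
    }

print(parse_mai_name("MAI M4 Coder Fast"))
# {'version': 4, 'specialization': 'Coder', 'variant': 'Fast'}
```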
---
### ⚡ Variant Breakdown
| Variant | Philosophy | Latency | Depth | Best For |
|---|---|---|---|---|
| 🟢 **Fast** | Speed-first. Minimal chain-of-thought, instant responses | 🔽 Low | Standard | Code generation, quick Q&A, real-time chat |
| 🟣 **Thinking** | Depth-first. Extended internal reasoning before answering | 🔼 Higher | Deep CoT | Math, logic, complex analysis, research |
> **Rule of thumb:** If you need an answer *now* — use **Fast**. If you need the *right* answer to a hard problem — use **Thinking**.
---
## 📋 Full Model Registry
| Model | Specialization | Variant | MSPLIT | MCE | Power (×) | Context | Status |
|---|---|---|---|---|---|---|---|
| **MAI M3 Coder Fast** | Code | Fast | 3A | 2.74 | ~3.2× | 512K | 🟢 Active |
| **MAI M3 Coder Thinking** | Code | Thinking | 3A | 2.74 | ~3.2× | 512K | 🟢 Active |
| **MAI M4 Coder Fast** ⭐ | Code | Fast | 4A | 3.16 | ~4.3× | >1M | 🟢 **Flagship** |
| **MAI M4 Coder Thinking** | Code | Thinking | 4A | 3.16 | ~4.3× | >1M | 🟢 Active |
| **MAI M5 Coder Fast** | Multimodal | Fast | 4A | 3.16 | ~4.3× | >1M | 🔵 Coming Soon |
---
## 📐 The MAI Math — Formulas & Coefficients
### 1️⃣ Power Multiplier Formula
Every MAI model's effective performance boost is calculated using:
```
             MCE² × 8
Power (×) = ──────────
             9.3 × 2
```
Or simplified:
```
Power = (MCE² × 8) / 18.6
```
| Variable | Full Name | Description |
|---|---|---|
| **MCE** | Merge Coefficient Exponent | Core efficiency metric of the merge. Higher = better synergy between merged weights |
| **8** | Base Parameter Scalar | Constant tied to the 8-expert routing in the merge pipeline |
| **9.3** | Normalization Factor | Empirical constant derived from benchmark calibration |
| **2** | Dual-pass Divisor | Accounts for the two-pass merge verification in MSPLIT |
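
As a quick numerical sketch (the constant names are ours; the values come straight from the table above):

```python
BASE_SCALAR = 8     # 8-expert routing constant
NORM_FACTOR = 9.3   # benchmark-calibration constant
DUAL_PASS = 2       # two-pass merge verification divisor

def power_multiplier(mce: float) -> float:
    """Effective performance boost for a given Merge Coefficient Exponent."""
    return (mce ** 2 * BASE_SCALAR) / (NORM_FACTOR * DUAL_PASS)

print(f"MSPLIT 3A: ~{power_multiplier(2.74):.2f}x")
print(f"MSPLIT 4A: ~{power_multiplier(3.16):.2f}x")
```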
---
### 2️⃣ MCE Progression Across Generations
MCE grows with each MSPLIT generation following a **square-root scaling law**:
```
MCE(n) = √(2.5 × n)
```
Where `n` = MSPLIT generation number.
| MSPLIT Gen | n | MCE = √(2.5n) | MCE² | Power (×) |
|---|---|---|---|---|
| 3A | 3 | √7.5 ≈ **2.74** | 7.5 | ~3.23× |
| 4A | 4 | √10.0 ≈ **3.16** | 10.0 | **~4.30×** |
| 5A *(projected)* | 5 | √12.5 ≈ **3.54** | 12.5 | ~5.38× |
| 6A *(projected)* | 6 | √15.0 ≈ **3.87** | 15.0 | ~6.45× |
> 📈 **Insight:** Power scales *linearly* with MSPLIT generation because MCE² = 2.5n, so Power = (2.5n × 8) / 18.6 ≈ **1.075n**. Each new generation adds roughly **+1.08×** to the multiplier.
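
The scaling law and its linear consequence can be verified in a few lines:

```python
import math

def mce(n: int) -> float:
    """Merge Coefficient Exponent for MSPLIT generation n."""
    return math.sqrt(2.5 * n)

def power(n: int) -> float:
    """Power multiplier; since MCE(n)**2 == 2.5 * n, this is ~1.075 * n."""
    return (mce(n) ** 2 * 8) / 18.6

for n in (3, 4, 5, 6):
    print(f"MSPLIT {n}A: MCE = {mce(n):.2f}, Power ≈ {power(n):.2f}x")
```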
---
### 3️⃣ Context Window Scaling
Context length doubles with each major version:
```
Context(v) = 64K × 2^v
```
| Version (v) | Calculation | Context Window |
|---|---|---|
| M3 (v=3) | 64K × 2³ | **512K** |
| M4 (v=4) | 64K × 2⁴ | **1,024K (>1M)** |
| M5 *(projected)* | 64K × 2⁵ | **2,048K (~2M)** |
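
The doubling rule is a one-liner (a sketch; the function returns context in K tokens):

```python
def context_k(v: int) -> int:
    """Context window in K tokens for major version v (doubles per version)."""
    return 64 * 2 ** v

print(context_k(4))  # 1024 -> the >1M window of the M4 line
print(context_k(5))  # 2048 -> the projected ~2M window of M5
```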
---
### 4️⃣ Effective Intelligence Index (EII)
To compare models holistically, we use the **EII** — a single score combining power and context:
```
EII = Power(×) × log₂(Context / 1K)
```
| Model | Power (×) | Context | log₂(C/1K) | **EII** |
|---|---|---|---|---|
| MAI M3 Coder Fast | 3.23 | 512K | 9 | **29.07** |
| **MAI M4 Coder Fast** | 4.30 | 1024K | 10 | **43.00** ⭐ |
| MAI M5 *(projected)* | 5.38 | 2048K | 11 | **59.18** |
> 🎯 **Notice the pattern?** EII ≈ 1.075 × n × (n + 6) — it grows *quadratically*, meaning each generation is dramatically more capable than the last.
> Models from M5 onward are planned to drop the dual-pass ÷ 2 divisor (Power = MCE² × 8 / 9.3); the projected figures above still use the current formula.
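
EII itself is one multiplication (a sketch; context is passed in K tokens):

```python
import math

def eii(power: float, context_k: int) -> float:
    """Effective Intelligence Index = Power x log2(Context / 1K)."""
    return power * math.log2(context_k)

print(round(eii(4.30, 1024), 2))  # MAI M4 Coder Fast
```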
---
### 5️⃣ Fast vs Thinking — Speed-Depth Tradeoff
```
                   Base Latency
Fast Latency     = ────────────
                     Power(×)

Thinking Latency = Base Latency × Thinking Depth Factor (TDF)
```
Where **TDF** typically ranges from **3× to 8×** depending on problem complexity.
| Variant | Relative Latency | Relative Accuracy (hard tasks) |
|---|---|---|
| Fast | **1×** (baseline) | ~85–92% |
| Thinking | **3–8×** slower | ~94–99% |
> 💡 **When to switch?** If Fast gives a confident answer → stay with Fast. If it hedges or the task involves multi-step reasoning → switch to Thinking.
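
The tradeoff in code (a sketch; the base latency and TDF values below are hypothetical):

```python
def fast_latency(base_s: float, power: float) -> float:
    """Fast variant: base latency divided by the power multiplier."""
    return base_s / power

def thinking_latency(base_s: float, tdf: float) -> float:
    """Thinking variant: base latency times the Thinking Depth Factor (3-8)."""
    return base_s * tdf

base = 2.0  # hypothetical base latency, seconds
print(f"Fast on M4:      {fast_latency(base, 4.30):.2f} s")
print(f"Thinking, TDF=5: {thinking_latency(base, 5.0):.2f} s")
```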
---
## 🔬 MSPLIT Technology — How It Works
```
┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│  Base Model  │    │  Base Model  │    │  Base Model  │
│      A       │    │      B       │    │      C       │
└──────┬───────┘    └──────┬───────┘    └──────┬───────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
                  ┌────────▼────────┐
                  │   PEREX MERGE   │  ← Weighted parameter fusion
                  │    Pipeline     │
                  └────────┬────────┘
                           │
                  ┌────────▼────────┐
                  │    MSPLIT nA    │  ← Split-verify-remerge (n passes)
                  │  Optimization   │
                  └────────┬────────┘
                           │
                  ┌────────▼────────┐
                  │  Final Merged   │
                  │      Model      │  → MCE = √(2.5 × n)
                  └─────────────────┘
```
**MSPLIT (Multi-Stage Parameter Splitting)** works in three phases:
1. **Merge** — Multiple base models are fused using the Perex Merge weighted-average pipeline
2. **Split** — The merged weights are split into parameter subgroups and independently evaluated
3. **Re-merge** — Only the highest-performing parameter configurations survive and are re-merged
Each MSPLIT generation (3A → 4A) adds an additional split-verify pass, increasing MCE and therefore the power multiplier.
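
The three phases can be caricatured with plain Python lists standing in for weight tensors. This is a toy sketch under our own assumptions — the scoring function and perturbation step are invented for the demo; the real Perex Merge pipeline is not public:

```python
import random

def perex_merge(models, weights):
    """Phase 1 (toy): weighted parameter fusion across base models."""
    return [sum(w * m[i] for m, w in zip(models, weights))
            for i in range(len(models[0]))]

def msplit(params, n_passes, score, group_size=2):
    """Phases 2-3 (toy): split parameters into subgroups, keep the
    better-scoring of {original, perturbed} per subgroup, repeat n passes."""
    for _ in range(n_passes):
        out = []
        for i in range(0, len(params), group_size):
            group = params[i:i + group_size]
            candidate = [p + random.gauss(0, 0.01) for p in group]  # variant
            out.extend(group if score(group) >= score(candidate) else candidate)
        params = out
    return params

merged = perex_merge([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.5])
print(merged)  # [2.0, 3.0]
tuned = msplit(merged, n_passes=4, score=lambda g: -sum(abs(x) for x in g))
print(len(tuned))  # 2
```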
---
## 🛡️ Access & Licensing
| | |
|---|---|
| **Access** | 🔒 Private — all models are served exclusively through our platform |
| **Hosting** | Puter.js |
| **Weights** | Not publicly distributed |
| **API** | Available through the MAI website |
| **Commercial Use** | Contact MythicGames for licensing |
---
<div align="center">
### 🌌 *"The future of AI is here"*
**Mythic Artificial Intelligence · MythicGames · 2026**
</div>