---
title: README
emoji:
colorFrom: blue
colorTo: indigo
sdk: static
pinned: false
---

# 🌌 Mythic Artificial Intelligence

by MythicGames

Building the next generation of merged language models

🌐 Visit our platform · 💬 Chat with MAI models · 📂 All Models


## 🧬 Model Families

MAI models follow a unified naming convention:

```
MAI M{version} {Specialization} {Variant}
MAI {version} {Variant}
MAI C{version} {Variant}
MAIGEN {version} {Specification}
MAIMIND {version} {Specification}
MAITTS {version} {Specification}
MAIEDITOR {version}.{Date of release} {Update feature name}
```

| Component | Meaning | Examples |
|---|---|---|
| M{version} | Generation / major version | M1, M2, M3, M4 |
| Specialization | Primary task focus | Coder, Chat, Reason, Vision |
| Variant | Speed / depth profile | Fast, Thinking |
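
As a quick illustration, the core `MAI M{version} {Specialization} {Variant}` pattern can be matched with a small regex. This parser is a hypothetical sketch based on the component table above (the specialization and variant vocabularies are taken from it); it is not an official MAI utility.

```python
import re

# Hypothetical parser for "MAI M{version} {Specialization} {Variant}".
# The accepted Specialization/Variant values come from the tables above;
# the function itself is illustrative, not part of any MAI tooling.
NAME_RE = re.compile(
    r"^MAI M(?P<version>\d+)"
    r"(?: (?P<specialization>Coder|Chat|Reason|Vision))?"
    r"(?: (?P<variant>Fast|Thinking))?$"
)

def parse_model_name(name: str) -> dict:
    """Split a MAI model name into its components, or raise ValueError."""
    match = NAME_RE.match(name)
    if match is None:
        raise ValueError(f"not a recognized MAI model name: {name!r}")
    return {k: v for k, v in match.groupdict().items() if v is not None}

print(parse_model_name("MAI M4 Coder Fast"))
# {'version': '4', 'specialization': 'Coder', 'variant': 'Fast'}
```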

## ⚡ Variant Breakdown

| Variant | Philosophy | Latency | Depth | Best For |
|---|---|---|---|---|
| 🟢 Fast | Speed-first. Minimal chain-of-thought, instant responses | 🔽 Low | Standard | Code generation, quick Q&A, real-time chat |
| 🟣 Thinking | Depth-first. Extended internal reasoning before answering | 🔼 Higher | Deep CoT | Math, logic, complex analysis, research |

Rule of thumb: If you need an answer now — use Fast. If you need the right answer to a hard problem — use Thinking.


## 📋 Full Model Registry

| Model | Specialization | Variant | MSPLIT | MCE | Power (×) | Context | Status |
|---|---|---|---|---|---|---|---|
| MAI M3 Coder Fast | Reasoning | Fast | 3A | 2.74 | ~3.2× | >1M | 🟢 Active |
| MAI M3 Coder Thinking | Reasoning | Thinking | 3A | 2.74 | ~3.2× | >1M | 🟢 Active |
| MAI M4 Coder Fast | Code | Fast | 4A | 3.16 | ~4.3× | >1M | 🟢 Flagship |
| MAI M4 Coder Thinking | Code | Thinking | 4A | 3.16 | ~4.3× | >1M | 🟢 Active |
| MAI M5 Coder Fast | Multimodal | Fast | 4A | 3.16 | ~4.3× | >1M | 🔵 Coming Soon |

## 📐 The MAI Math — Formulas & Coefficients

### 1️⃣ Power Multiplier Formula

Every MAI model's effective performance boost is calculated using:

```
               MCE² × 8
Power (×) = ────────────
               9.3 × 2
```

Or simplified:

```
Power = (MCE² × 8) / 18.6
```

| Variable | Full Name | Description |
|---|---|---|
| MCE | Merge Coefficient Exponent | Core efficiency metric of the merge. Higher = better synergy between merged weights |
| 8 | Base Parameter Scalar | Constant tied to the 8-expert routing in the merge pipeline |
| 9.3 | Normalization Factor | Empirical constant derived from benchmark calibration |
| 2 | Dual-pass Divisor | Accounts for the two-pass merge verification in MSPLIT |
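
The formula is straightforward to compute. A minimal sketch (the function name and constant names are ours; the arithmetic is exactly the formula above):

```python
def power_multiplier(mce: float) -> float:
    """Power(×) = (MCE² × 8) / (9.3 × 2), per the formula above."""
    BASE_SCALAR = 8       # 8-expert routing constant
    NORMALIZATION = 9.3   # benchmark-calibrated constant
    DUAL_PASS = 2         # two-pass merge verification divisor
    return (mce ** 2 * BASE_SCALAR) / (NORMALIZATION * DUAL_PASS)

print(round(power_multiplier(2.74), 2))  # MAI M3 → 3.23
print(round(power_multiplier(3.16), 2))  # MAI M4 → 4.29 (~4.3)
```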

### 2️⃣ MCE Progression Across Generations

MCE grows with each MSPLIT generation following a square-root scaling law:

```
MCE(n) = √(2.5 × n)
```

Where n = MSPLIT generation number.

| MSPLIT Gen | n | MCE = √(2.5n) | MCE² | Power (×) |
|---|---|---|---|---|
| 3A | 3 | √7.5 ≈ 2.74 | 7.5 | ~3.23× |
| 4A | 4 | √10.0 ≈ 3.16 | 10.0 | ~4.30× |
| 5A (projected) | 5 | √12.5 ≈ 3.54 | 12.5 | ~5.38× |
| 6A (projected) | 6 | √15.0 ≈ 3.87 | 15.0 | ~6.45× |

📈 Insight: Power scales linearly with MSPLIT generation because MCE² = 2.5n, so Power = (2.5n × 8) / 18.6 ≈ 1.075n. Each new generation adds roughly +1.08× to the multiplier.
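The scaling law and the linear power relation above can be checked in a few lines (function names are ours; the math is from the formulas in this section):

```python
import math

def mce(n: int) -> float:
    """MCE(n) = √(2.5 × n) for MSPLIT generation n."""
    return math.sqrt(2.5 * n)

def power_from_generation(n: int) -> float:
    """Since MCE² = 2.5n, Power = (2.5n × 8) / 18.6 ≈ 1.075n."""
    return (2.5 * n * 8) / 18.6

for n, label in [(3, "3A"), (4, "4A"), (5, "5A"), (6, "6A")]:
    print(f"{label}: MCE ≈ {mce(n):.2f}, Power ≈ {power_from_generation(n):.2f}×")
```

Running this reproduces the MCE and Power columns of the progression table.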


### 3️⃣ Context Window Scaling

Context length doubles with each major version:

```
Context(v) = 64K × 2^v
```

| Version (v) | Calculation | Context Window |
|---|---|---|
| M3 (v=3) | 64K × 2³ | 512K |
| M4 (v=4) | 64K × 2⁴ | 1,024K (>1M) |
| M5 (projected) | 64K × 2⁵ | 2,048K (~2M) |
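
The doubling rule in one line (function name is ours; the arithmetic is the formula above):

```python
def context_window_k(v: int) -> int:
    """Context(v) = 64K × 2^v, returned in units of K tokens."""
    return 64 * 2 ** v

for v in (3, 4, 5):
    print(f"M{v}: {context_window_k(v):,}K")
```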

### 4️⃣ Effective Intelligence Index (EII)

To compare models holistically, we use the EII — a single score combining power and context:

```
EII = Power(×) × log₂(Context / 1K)
```

| Model | Power (×) | Context | log₂(C/1K) | EII |
|---|---|---|---|---|
| MAI M3 Reason Fast | 3.23 | 512K | 9 | 29.07 |
| MAI M4 Coder Fast | 4.30 | 1024K | 10 | 43.00 |
| MAI M5 (projected) | 5.38 | 2048K | 11 | 59.18 |

🎯 Notice the pattern? EII ≈ 1.075 × n × (n + 6) — it grows quadratically, meaning each generation is dramatically more capable than the last. Models like M5 will use a revised formula: 64 / 9.3, without the / 2 divisor.
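
The EII definition itself is a one-liner (function name is ours; context is passed in units of K tokens, matching the table):

```python
import math

def eii(power: float, context_k: int) -> float:
    """EII = Power(×) × log₂(Context / 1K); context given in K tokens."""
    return power * math.log2(context_k)

print(round(eii(4.30, 1024), 2))  # MAI M4 Coder Fast → 43.0
print(round(eii(5.38, 2048), 2))  # MAI M5 (projected) → 59.18
```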


### 5️⃣ Fast vs Thinking — Speed-Depth Tradeoff

```
                  Base Latency
Fast Latency  =  ─────────────
                    Power(×)

Thinking Latency = Base Latency × Thinking Depth Factor (TDF)
```

Where TDF typically ranges from 3× to 8× depending on problem complexity.
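
A minimal sketch of the two latency formulas; the base latency of 10 seconds and the mid-range TDF of 5 are illustrative values we chose, not published benchmarks:

```python
def fast_latency(base: float, power: float) -> float:
    """Fast latency = base latency / Power(×)."""
    return base / power

def thinking_latency(base: float, tdf: float) -> float:
    """Thinking latency = base latency × TDF (TDF typically 3–8)."""
    return base * tdf

base = 10.0  # seconds; illustrative baseline, not a measured figure
print(f"Fast (M4):  {fast_latency(base, 4.30):.2f}s")
print(f"Thinking:   {thinking_latency(base, 5):.1f}s")  # mid-range TDF = 5
```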

| Variant | Relative Latency | Relative Accuracy (hard tasks) |
|---|---|---|
| Fast | (baseline) | ~85–92% |
| Thinking | 3–8× slower | ~94–99% |

💡 When to switch? If Fast gives a confident answer → stay with Fast. If it hedges or the task involves multi-step reasoning → switch to Thinking.


## 🔬 MSPLIT Technology — How It Works

```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  Base Model  │     │  Base Model  │     │  Base Model  │
│      A       │     │      B       │     │      C       │
└──────┬───────┘     └──────┬───────┘     └──────┬───────┘
       │                    │                    │
       └────────────────────┼────────────────────┘
                            │
                    ┌───────▼────────┐
                    │  PEREX MERGE   │  ← Weighted parameter fusion
                    │    Pipeline    │
                    └───────┬────────┘
                            │
                    ┌───────▼────────┐
                    │   MSPLIT nA    │  ← Split-verify-remerge (n passes)
                    │  Optimization  │
                    └───────┬────────┘
                            │
                    ┌───────▼────────┐
                    │  Final Merged  │
                    │     Model      │  → MCE = √(2.5 × n)
                    └────────────────┘
```

MSPLIT (Multi-Stage Parameter Splitting) works in three phases:

  1. Merge — Multiple base models are fused using the Perex Merge weighted-average pipeline
  2. Split — The merged weights are split into parameter subgroups and independently evaluated
  3. Re-merge — Only the highest-performing parameter configurations survive and are re-merged

Each MSPLIT generation (3A → 4A) adds an additional split-verify pass, increasing MCE and therefore the power multiplier.
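
The three phases can be sketched as a toy loop. The actual MSPLIT pipeline is not public, so the group size, the jittered candidate, and the scoring function below are invented stand-ins that only illustrate the merge → split → re-merge flow, not the real algorithm:

```python
import random

def perex_merge(models: list[list[float]], weights: list[float]) -> list[float]:
    """Phase 1: weighted-average fusion of base-model parameters."""
    return [sum(w * m[i] for w, m in zip(weights, models))
            for i in range(len(models[0]))]

def msplit_pass(params: list[float], group_size: int = 4) -> list[float]:
    """Phases 2-3: split into subgroups, evaluate, re-merge the winners.

    Stand-in logic: each subgroup competes against a jittered copy of
    itself under a dummy quality metric; the better variant survives.
    """
    remerged = []
    for i in range(0, len(params), group_size):
        group = params[i:i + group_size]
        candidate = [p + random.gauss(0, 0.01) for p in group]
        score = lambda g: -sum(abs(p) for p in g)  # dummy metric, not MSPLIT's
        remerged.extend(group if score(group) >= score(candidate) else candidate)
    return remerged

models = [[0.2, 0.4, 0.6, 0.8], [0.1, 0.3, 0.5, 0.7], [0.0, 0.2, 0.4, 0.6]]
merged = perex_merge(models, weights=[0.5, 0.3, 0.2])
final = msplit_pass(merged)  # one pass; a generation nA would run n passes
print(len(final) == len(merged))  # parameter count is preserved
```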


## 🛡️ Access & Licensing

| | |
|---|---|
| **Access** | 🔒 Private — all models are served exclusively through our platform |
| **Hosting** | Puter.js |
| **Weights** | Not publicly distributed |
| **API** | Available through the MAI website |
| **Commercial Use** | Contact MythicGames for licensing |

🌌 "The future of AI is here"

Mythic Artificial Intelligence · MythicGames · 2026