Update README.md
#85 by SonYiHF - opened

README.md CHANGED
@@ -1,493 +1,137 @@
---
tags:
library_name: transformers
pipeline_tag:
paper: arxiv.org/abs/2602.02276
---
<div align="center">
  <picture>
    <img src="figures/
  </picture>
</div>
<hr>
<div align="center" style="line-height:1">
  <a href="https://www.
  <a href="https://github.com/
  <a href="https://www.
</div>

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/
  <a href="https://twitter.com/
  <a href="https://discord.gg/
</div>
<div align="center" style="line-height: 1;">
  <a href="
</div>
<p align="center">
  <b
</p>

## 0.

- 2026.
## 1.

- **Native Multimodality**: Pre-trained on vision–language tokens, K2.5 excels in visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs.
- **Coding with Vision**: K2.5 generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing.
- **Agent Swarm**: K2.5 moves from single-agent scaling to a self-directed, coordinated swarm-style execution scheme: it decomposes complex tasks into parallel sub-tasks executed by dynamically instantiated, domain-specific agents.
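The swarm-style execution described above can be pictured as a coordinator fanning sub-tasks out to parallel, domain-specific workers and merging their results. The sketch below is a hypothetical illustration only; `run_subagent` is a stub standing in for a real agent call, not Kimi's actual orchestration code.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(domain: str, subtask: str) -> str:
    # Stub for a dynamically instantiated, domain-specific agent.
    # A real swarm would spawn an agent with domain-tailored tools and prompts.
    return f"[{domain}] result for: {subtask}"

def swarm_execute(task: str, subtasks: dict) -> list:
    # Decompose the task into sub-tasks, execute them in parallel,
    # and collect results for the coordinating agent to merge.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        futures = {
            domain: pool.submit(run_subagent, domain, sub)
            for domain, sub in subtasks.items()
        }
        return [f.result() for f in futures.values()]

results = swarm_execute(
    "research a company",
    {"web": "collect recent news", "finance": "summarize filings"},
)
```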
<div align="center">
| | |
|:---:|:---:|
| **Architecture** | Mixture-of-Experts (MoE) |
| **Total Parameters** | 1T |
| **Activated Parameters** | 32B |
| **Number of Layers** (Dense layer included) | 61 |
| **Number of Dense Layers** | 1 |
| **Attention Hidden Dimension** | 7168 |
| **MoE Hidden Dimension** (per Expert) | 2048 |
| **Number of Attention Heads** | 64 |
| **Number of Experts** | 384 |
| **Selected Experts per Token** | 8 |
| **Number of Shared Experts** | 1 |
| **Vocabulary Size** | 160K |
| **Context Length** | 256K |
| **Attention Mechanism** | MLA |
| **Activation Function** | SwiGLU |
| **Vision Encoder** | MoonViT |
| **Parameters of Vision Encoder** | 400M |

</div>
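As a sanity check, the activated-parameter count can be roughly reproduced from the table: each SwiGLU expert carries three 7168×2048 projections, and every token activates 8 routed experts plus 1 shared expert across the 60 MoE layers. This is only a back-of-envelope sketch; it ignores attention, embedding, and dense-layer parameters, which account for the remainder of the 32B.

```python
hidden = 7168              # attention hidden dimension
moe_hidden = 2048          # MoE hidden dimension per expert
experts_per_token = 8 + 1  # selected experts + one shared expert
moe_layers = 61 - 1        # total layers minus the single dense layer

# SwiGLU uses three weight matrices per expert: gate, up, and down projections.
params_per_expert = 3 * hidden * moe_hidden
active_moe_params = moe_layers * experts_per_token * params_per_expert
print(f"{active_moe_params / 1e9:.1f}B active MoE parameters per token")
# → 23.8B active MoE parameters per token
```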
## 3.
<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center"><sup></sup></th>
<th align="center"><sup></sup></th>
<th align="center"><sup></sup></th>
<th align="center"><sup>Gemini 3 Pro <br><sup>(High Thinking Level)</sup></sup></th>
<th align="center"><sup>DeepSeek V3.2 <br><sup>(Thinking)</sup></sup></th>
<th align="center"><sup>Qwen3-VL-<br>235B-A22B-<br>Thinking</sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=8><strong>Reasoning &</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">HLE-Full</td>
<td align="center" style="vertical-align: middle">30.1</td>
<td align="center" style="vertical-align: middle">34.5</td>
<td align="center" style="vertical-align: middle">30.8</td>
<td align="center" style="vertical-align: middle">37.5</td>
<td align="center" style="vertical-align: middle">25.1<sup>†</sup></td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">HLE-Full</td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">40.8<sup>†</sup></td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">AIME 2025</td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">95.0</td>
<td align="center" style="vertical-align: middle">93.1</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">HMMT 2025 (Feb)</td>
<td align="center" style="vertical-align: middle">95.4</td>
<td align="center" style="vertical-align: middle">99.4</td>
<td align="center" style="vertical-align: middle">92.9*</td>
<td align="center" style="vertical-align: middle">97.3*</td>
<td align="center" style="vertical-align: middle">92.5</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">IMO-AnswerBench</td>
<td align="center" style="vertical-align: middle">81.8</td>
<td align="center" style="vertical-align: middle">86.3</td>
<td align="center" style="vertical-align: middle">78.5*</td>
<td align="center" style="vertical-align: middle">83.1*</td>
<td align="center" style="vertical-align: middle">78.3</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">GPQA-Diamond</td>
<td align="center" style="vertical-align: middle">87.6</td>
<td align="center" style="vertical-align: middle">92.4</td>
<td align="center" style="vertical-align: middle">87.0</td>
<td align="center" style="vertical-align: middle">91.9</td>
<td align="center" style="vertical-align: middle">82.4</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MMLU-Pro</td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">86.7*</td>
<td align="center" style="vertical-align: middle">89.3*</td>
<td align="center" style="vertical-align: middle">90.1</td>
<td align="center" style="vertical-align: middle">85.0</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" colspan=8><strong>Image & Video</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MMMU-Pro</td>
<td align="center" style="vertical-align: middle">78.5</td>
<td align="center" style="vertical-align: middle">79.5*</td>
<td align="center" style="vertical-align: middle">74.0</td>
<td align="center" style="vertical-align: middle">81.0</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">69.3</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">CharXiv (RQ)</td>
<td align="center" style="vertical-align: middle">77.5</td>
<td align="center" style="vertical-align: middle">82.1</td>
<td align="center" style="vertical-align: middle">67.2*</td>
<td align="center" style="vertical-align: middle">81.4</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">66.1</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MathVision</td>
<td align="center" style="vertical-align: middle">84.2</td>
<td align="center" style="vertical-align: middle">83.0</td>
<td align="center" style="vertical-align: middle">77.1*</td>
<td align="center" style="vertical-align: middle">86.1*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">74.6</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MathVista (mini)</td>
<td align="center" style="vertical-align: middle">90.1</td>
<td align="center" style="vertical-align: middle">82.8*</td>
<td align="center" style="vertical-align: middle">80.2*</td>
<td align="center" style="vertical-align: middle">89.8*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">85.8</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">ZeroBench</td>
<td align="center" style="vertical-align: middle">9</td>
<td align="center" style="vertical-align: middle">9*</td>
<td align="center" style="vertical-align: middle">3*</td>
<td align="center" style="vertical-align: middle">8*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">4*</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">ZeroBench<br>(w/ tools)</td>
<td align="center" style="vertical-align: middle">11</td>
<td align="center" style="vertical-align: middle">7*</td>
<td align="center" style="vertical-align: middle">9*</td>
<td align="center" style="vertical-align: middle">12*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">3*</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">OCRBench</td>
<td align="center" style="vertical-align: middle">92.3</td>
<td align="center" style="vertical-align: middle">80.7*</td>
<td align="center" style="vertical-align: middle">86.5*</td>
<td align="center" style="vertical-align: middle">90.3*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">87.5</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">OmniDocBench 1.5</td>
<td align="center" style="vertical-align: middle">88.8</td>
<td align="center" style="vertical-align: middle">85.7</td>
<td align="center" style="vertical-align: middle">87.7*</td>
<td align="center" style="vertical-align: middle">88.5</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">82.0*</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">InfoVQA (val)</td>
<td align="center" style="vertical-align: middle">92.6</td>
<td align="center" style="vertical-align: middle">84*</td>
<td align="center" style="vertical-align: middle">76.9*</td>
<td align="center" style="vertical-align: middle">57.2*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">89.5</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">SimpleVQA</td>
<td align="center" style="vertical-align: middle">71.2</td>
<td align="center" style="vertical-align: middle">55.8*</td>
<td align="center" style="vertical-align: middle">69.7*</td>
<td align="center" style="vertical-align: middle">69.7*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">56.8*</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle"><a href="https://github.com/MoonshotAI/WorldVQA">WorldVQA</a></td>
<td align="center" style="vertical-align: middle">46.3</td>
<td align="center" style="vertical-align: middle">28.0</td>
<td align="center" style="vertical-align: middle">36.8</td>
<td align="center" style="vertical-align: middle">47.4</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">23.5</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">VideoMMMU</td>
<td align="center" style="vertical-align: middle">86.6</td>
<td align="center" style="vertical-align: middle">85.9</td>
<td align="center" style="vertical-align: middle">84.4*</td>
<td align="center" style="vertical-align: middle">87.6</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">80.0</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MMVU</td>
<td align="center" style="vertical-align: middle">80.4</td>
<td align="center" style="vertical-align: middle">80.8*</td>
<td align="center" style="vertical-align: middle">77.3</td>
<td align="center" style="vertical-align: middle">77.5</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">71.1</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MotionBench</td>
<td align="center" style="vertical-align: middle">70.4</td>
<td align="center" style="vertical-align: middle">64.8</td>
<td align="center" style="vertical-align: middle">60.3</td>
<td align="center" style="vertical-align: middle">70.3</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">VideoMME</td>
<td align="center" style="vertical-align: middle">87.4</td>
<td align="center" style="vertical-align: middle">86.0*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">88.4*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">79.0</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">79.8</td>
<td align="center" style="vertical-align: middle">76.5*</td>
<td align="center" style="vertical-align: middle">67.2*</td>
<td align="center" style="vertical-align: middle">77.7*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">65.6*</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">LVBench</td>
<td align="center" style="vertical-align: middle">75.9</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">73.5*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">63.6</td>
</tr>
<tr>
<td align="center" colspan=8><strong>Coding</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">SWE-Bench Verified</td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">76.2</td>
<td align="center" style="vertical-align: middle">73.1</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">SWE-Bench Pro</td>
<td align="center" style="vertical-align: middle">50.7</td>
<td align="center" style="vertical-align: middle">55.6</td>
<td align="center" style="vertical-align: middle">55.4*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">SWE-Bench Multilingual</td>
<td align="center" style="vertical-align: middle">73.0</td>
<td align="center" style="vertical-align: middle">72.0</td>
<td align="center" style="vertical-align: middle">77.5</td>
<td align="center" style="vertical-align: middle">65.0</td>
<td align="center" style="vertical-align: middle">70.2</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">Terminal Bench 2.0</td>
<td align="center" style="vertical-align: middle">50.8</td>
<td align="center" style="vertical-align: middle">54.0</td>
<td align="center" style="vertical-align: middle">59.3</td>
<td align="center" style="vertical-align: middle">54.2</td>
<td align="center" style="vertical-align: middle">46.4</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">PaperBench</td>
<td align="center" style="vertical-align: middle">63.5</td>
<td align="center" style="vertical-align: middle">63.7*</td>
<td align="center" style="vertical-align: middle">72.9*</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">47.1</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">CyberGym</td>
<td align="center" style="vertical-align: middle">41.3</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">50.6</td>
<td align="center" style="vertical-align: middle">39.9*</td>
<td align="center" style="vertical-align: middle">17.3*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">SciCode</td>
<td align="center" style="vertical-align: middle">48.7</td>
<td align="center" style="vertical-align: middle">52.1</td>
<td align="center" style="vertical-align: middle">49.5</td>
<td align="center" style="vertical-align: middle">56.1</td>
<td align="center" style="vertical-align: middle">38.9</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">OJBench (cpp)</td>
<td align="center" style="vertical-align: middle">57.4</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">54.6*</td>
<td align="center" style="vertical-align: middle">68.5*</td>
<td align="center" style="vertical-align: middle">54.7*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">LiveCodeBench (v6)</td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">87.4*</td>
<td align="center" style="vertical-align: middle">83.3</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" colspan=8><strong>Long Context</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">Longbench v2</td>
<td align="center" style="vertical-align: middle">61.0</td>
<td align="center" style="vertical-align: middle">54.5*</td>
<td align="center" style="vertical-align: middle">64.4*</td>
<td align="center" style="vertical-align: middle">68.2*</td>
<td align="center" style="vertical-align: middle">59.8*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">AA-LCR</td>
<td align="center" style="vertical-align: middle">70.0</td>
<td align="center" style="vertical-align: middle">72.3*</td>
<td align="center" style="vertical-align: middle">71.3*</td>
<td align="center" style="vertical-align: middle">65.3*</td>
<td align="center" style="vertical-align: middle">64.3*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" colspan=8><strong>Agentic Search</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">BrowseComp</td>
<td align="center" style="vertical-align: middle">60.6</td>
<td align="center" style="vertical-align: middle" rowspan="2">65.8</td>
<td align="center" style="vertical-align: middle">37.0</td>
<td align="center" style="vertical-align: middle">37.8</td>
<td align="center" style="vertical-align: middle">51.4</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">BrowseComp<br>(w/ctx manage)</td>
<td align="center" style="vertical-align: middle">74.9</td>
<td align="center" style="vertical-align: middle">57.8</td>
<td align="center" style="vertical-align: middle">59.2</td>
<td align="center" style="vertical-align: middle">67.6</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">BrowseComp<br>(Agent Swarm)</td>
<td align="center" style="vertical-align: middle">78.4</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">WideSearch<br> (item-f1)</td>
<td align="center" style="vertical-align: middle">72.7</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">76.2*</td>
<td align="center" style="vertical-align: middle">57.0</td>
<td align="center" style="vertical-align: middle">32.5*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">WideSearch<br> (item-f1 Agent Swarm)</td>
<td align="center" style="vertical-align: middle">79.0</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">DeepSearchQA</td>
<td align="center" style="vertical-align: middle">77.1</td>
<td align="center" style="vertical-align: middle">71.3*</td>
<td align="center" style="vertical-align: middle">76.1*</td>
<td align="center" style="vertical-align: middle">63.2*</td>
<td align="center" style="vertical-align: middle">60.9*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">67.8</td>
<td align="center" style="vertical-align: middle">-</td>
<td align="center" style="vertical-align: middle">66.2*</td>
<td align="center" style="vertical-align: middle">49.9</td>
<td align="center" style="vertical-align: middle">59.1*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle"></td>
<td align="center" style="vertical-align: middle">45.5*</td>
<td align="center" style="vertical-align: middle">49.5*</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
</tbody>
</table>

@@ -495,247 +139,68 @@ Kimi K2.5 is an open-source, native multimodal agentic model built through conti

</div>
<details>
<summary><b>Footnotes</b></summary>

1. General Testing Details
   - We report results for Kimi K2.5 and DeepSeek-V3.2 with thinking mode enabled, Claude Opus 4.5 with extended thinking mode, GPT-5.2 with xhigh reasoning effort, and Gemini 3 Pro with a high thinking level. For vision benchmarks, we additionally report results for Qwen3-VL-235B-A22B-Thinking.
   - Unless otherwise specified, all Kimi K2.5 experiments were conducted with temperature = 1.0, top-p = 0.95, and a context length of 256k tokens.
   - Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.5 and are marked with an asterisk (*).
   - We could not evaluate GPT-5.2 xhigh on all benchmarks due to service stability issues; untested benchmarks are marked "-".
2. Text and Reasoning
   - HLE, AIME 2025, HMMT 2025 (Feb), and GPQA-Diamond were evaluated with a maximum completion budget of 96k tokens.
   - Results for AIME and HMMT are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).
   - For HLE, we report scores on the full set (text & image). Kimi K2.5 scores 31.5 (text) and 21.3 (image) without tools, and 51.8 (text) and 39.8 (image) with tools. The DeepSeek-V3.2 score corresponds to its text-only subset (marked with †). Hugging Face access was blocked to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the latest round of tool messages is retained.
3. Tool-Augmented / Agentic Search
   - Kimi K2.5 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools and all agentic search benchmarks.
   - Except for BrowseComp (where K2.5 and DeepSeek-V3.2 used the discard-all strategy), no context management was applied, and tasks exceeding the supported context length were counted as failed.
   - The test system prompts emphasize deep and proactive tool use, instructing models to reason carefully, leverage tools, and verify uncertain information. Full prompts will be provided in the technical report.
   - Results for Seal-0 and WideSearch are averaged over four runs (avg@4).
4. Vision Benchmarks
   - Max-tokens = 64k, averaged over three runs (avg@3).
   - ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning.
   - MMMU-Pro follows the official protocol, preserving input order and prepending images.
   - GPT-5.2 xhigh had a ~10% failure rate (no output despite 3 retries), treated as incorrect; reported scores likely underestimate its true performance.
   - WorldVQA is a benchmark designed to evaluate atomic vision-centric world knowledge; it is available at https://github.com/MoonshotAI/WorldVQA.
   - The OmniDocBench score is computed as (1 − normalized Levenshtein distance) × 100, where a higher score denotes superior accuracy.
5. Coding Tasks
   - Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser. We evaluated Terminal-Bench 2.0 under non-thinking mode because our current context management strategy for thinking mode is incompatible with Terminus-2.
   - For the SWE-Bench series (Verified, Multilingual, and Pro), we used an internally developed evaluation framework with a minimal set of tools—bash, createfile, insert, view, strreplace, and submit—along with system prompts tailored to the tasks. The highest scores were achieved under non-thinking mode.
   - The Claude Opus 4.5 score on CyberGym is reported under the non-thinking setting.
   - All reported coding scores are averaged over 5 independent runs.
6. Long-Context Benchmarks
   - AA-LCR: scores averaged over three runs (avg@3).
   - LongBench-V2: identical prompts, with input contexts standardized to ~128k tokens.
7. Agent Swarm
   - BrowseComp (Swarm Mode): main agent max 15 steps; sub-agents max 100 steps.
   - WideSearch (Swarm Mode): main and sub-agents max 100 steps.

</details>
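The "simple context management" strategy in the footnotes (once the context exceeds a threshold, keep only the latest round of tool messages) can be sketched as follows. This is an illustrative reconstruction, not the actual evaluation harness; `count_tokens` is a hypothetical stand-in for a real tokenizer-based count.

```python
def count_tokens(message: dict) -> int:
    # Hypothetical stand-in for a real tokenizer-based token count.
    return len(str(message.get("content", ""))) // 4

def manage_context(messages: list, threshold: int) -> list:
    # Below the threshold, leave the conversation untouched.
    if sum(count_tokens(m) for m in messages) <= threshold:
        return messages
    # Above it, retain only the latest tool message; all non-tool
    # messages (system/user/assistant) are always kept.
    last_tool = max(i for i, m in enumerate(messages) if m["role"] == "tool")
    return [
        m for i, m in enumerate(messages)
        if m["role"] != "tool" or i == last_tool
    ]
```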
| 534 |
-
|
| 535 |
-
## 4. Native INT4 Quantization
|
| 536 |
-
Kimi-K2.5 adopts the same native int4 quantization method as [Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking#4-native-int4-quantization).

## 5. Deployment

> [!Note]
> You can access Kimi-K2.5's API at https://platform.moonshot.ai, which provides OpenAI/Anthropic-compatible endpoints. To verify that a deployment is correct, we also provide the [Kimi Vendor Verifier](https://kimi.com/blog/kimi-vendor-verifier.html).

Kimi-K2.5 is currently recommended to run on the following inference engines:

* vLLM
* SGLang
* KTransformers

The minimum version requirement for `transformers` is `4.57.1`.

Deployment examples can be found in the [Model Deployment Guide](docs/deploy_guidance.md).

---

## 6. Model Usage

The usage demos below show how to call our official API.

For third-party APIs deployed with vLLM or SGLang, please note:

> [!Note]
> - Chat with video content is an experimental feature and is currently supported only in our official API.
>
> - The recommended `temperature` is `1.0` for Thinking mode and `0.6` for Instant mode.
>
> - The recommended `top_p` is `0.95`.
>
> - To use Instant mode, pass `{'chat_template_kwargs': {"thinking": False}}` in `extra_body`.

```python
import openai


def simple_chat(client: openai.OpenAI, model_name: str):
    messages = [
        {'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
            ],
        },
    ]
    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=4096
    )
    print('====== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # To use Instant mode, pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')
```
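The two `extra_body` payloads in the comments above differ only in shape between the official API and vLLM/SGLang; a small helper (illustrative, not part of any SDK) makes the switch explicit:

```python
def instant_mode_extra_body(backend: str) -> dict:
    # the official API and vLLM/SGLang disable thinking with different payloads
    if backend == 'official':
        return {'thinking': {'type': 'disabled'}}
    if backend in ('vllm', 'sglang'):
        return {'chat_template_kwargs': {'thinking': False}}
    raise ValueError(f'unknown backend: {backend}')


print(instant_mode_extra_body('official'))  # {'thinking': {'type': 'disabled'}}
```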

```python
import base64

import openai
import requests


def chat_with_image(client: openai.OpenAI, model_name: str):
    url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/kimi-logo.png'
    image_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'Describe this image in detail.'},
                {
                    'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_base64}'},
                },
            ],
        }
    ]

    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=8192
    )
    print('====== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

    return response.choices[0].message.content
```
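The image above is shipped inline as a base64 data URL; the encoding step can be exercised without any network call (the byte string below is a stand-in for the PNG fetched by `requests.get`):

```python
import base64

# stand-in payload: the 8-byte PNG signature plus padding, instead of the real logo
png_bytes = b'\x89PNG\r\n\x1a\n' + b'\x00' * 8
image_base64 = base64.b64encode(png_bytes).decode()
data_url = f'data:image/png;base64,{image_base64}'

# every PNG payload starts with the same base64 prefix
assert data_url.startswith('data:image/png;base64,iVBORw0KGgo')
```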

The example below shows chat with video, which is experimental and supported only in the official API. Parts of the original snippet were lost; the message payload is reconstructed to mirror the image example, and the sample URL is a placeholder.

```python
import base64

import openai
import requests


def chat_with_video(client: openai.OpenAI, model_name: str):
    # hypothetical sample video; substitute a real, reachable URL
    url = 'https://example.com/sample-video.mp4'
    video_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'Describe this video in detail.'},
                {
                    # content-part structure assumed by analogy with the image example
                    'type': 'video_url',
                    'video_url': {'url': f'data:video/mp4;base64,{video_base64}'},
                },
            ],
        }
    ]

    response = client.chat.completions.create(model=model_name, messages=messages)
    print('====== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

    return response.choices[0].message.content
```

K2.5 shares the same Interleaved Thinking and Multi-Step Tool Call design as K2 Thinking. For a usage example, please refer to the [K2 Thinking documentation](https://platform.moonshot.ai/docs/guide/use-kimi-k2-thinking-model#complete-example).

Kimi K2.5 works best with Kimi Code CLI as its agent framework; give it a try at https://www.kimi.com/code.

---

## 7. License

Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).

---

## 8. Third-Party Notices

See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md).

---

## 9. Contact Us

If you have any questions, please reach out at [support@moonshot.cn](mailto:support@moonshot.cn).

```bibtex
@misc{
  title={},
  author={},
  year={2026},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2602.02276},
}
```
---
tags:
- moe
- agentic
- backend-engineering
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
paper: arxiv.org/abs/2602.02276
---
<div align="center">
  <picture>
    <img src="figures/kirim-v3-logo.png" width="30%" alt="Kirim V3">
  </picture>
</div>
<hr>
<div align="center" style="line-height:1">
  <a href="https://www.kirim-ai.kom" target="_blank"><img alt="Launch" src="https://img.shields.io/badge/⚡%20Launch-Kirim%20V3-1783ff?logoColor=white"/></a>
  <a href="https://github.com/kirim-ai/Kirim-V3"><img alt="Source" src="https://img.shields.io/badge/Source-Kirim%20V3-181717?logo=github&logoColor=white"/></a>
  <a href="https://www.kirim-ai.kom" target="_blank"><img alt="Enterprise" src="https://img.shields.io/badge/Enterprise-Kirim%20AI-000000?logo=blueprint&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/kirim-ai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Artifacts-Kirim%20AI-ffc107?logoColor=white"/></a>
  <a href="https://twitter.com/kirim_ai" target="_blank"><img alt="X" src="https://img.shields.io/badge/Social-Kirim.ai-000000?logo=x&logoColor=white"/></a>
  <a href="https://discord.gg/kirim-ai" target="_blank"><img alt="Community" src="https://img.shields.io/badge/Community-Discord-5865F2?logo=discord&logoColor=white"/></a>
</div>
<div align="center" style="line-height: 1;">
  <a href="LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Apache_2.0-3b8739?logo=apache&logoColor=white"/></a>
</div>
<p align="center">
  <b><a href="https://www.kirim-ai.kom/blog/kirim-v3.html">Engineering Blog</a></b> | <b>📄 <a href="https://arxiv.org/abs/2602.02276">Technical Paper</a></b>
</p>

## 0. Version History

- **2026.02.12 (Current)**:
  - Deployment of Kirim-V3 (102B Base).
  - Introduction of Native Agentic Reasoning pathways.
  - Integration of native 4-bit quantization support for scalable enterprise serving.

## 1. Introduction

Kirim-V3 is a frontier-scale, native agentic intelligence system. Engineered on a foundational pool of 15 trillion specialized tokens, it is a **High-Efficacy Sparse Mixture-of-Experts (MoE)** model designed specifically for professional backend engineering and autonomous system orchestration.

Kirim-V3 bridges the gap between traditional conversational models and actionable, state-aware agent swarms, providing a unified interface for both instant logical execution and deep architectural reasoning.

### Core Capabilities

- **Reasoning-First Architecture**: Unlike general-purpose LLMs, Kirim-V3 is pre-trained on deep-reasoning traces, enabling it to solve high-entropy system-design challenges and complex logic puzzles.
- **Backend Optimization**: Specialized pathways for high-performance languages (Rust, Go, C++) and modern cloud infrastructure (Kubernetes, AWS/GCP architecture).
- **Autonomous Multi-Agent Coordination**: Native support for task decomposition, allowing a single Kirim-V3 instance to spin up, monitor, and merge results from specialized sub-agent routines.

## 2. Technical Profile

<div align="center">

| Metric | Detail |
| :--- | :--- |
| **Model Type** | High-Efficacy Sparse Mixture-of-Experts |
| **Top-Level Parameters** | 102B |
| **Activated Parameters** | 28B |
| **Depth** | 80 Layers |
| **Expert Routing** | Top-4 Gating |
| **Total Experts** | 64 per MoE Layer |
| **Contextual Window** | 256K Tokens |
| **Architecture Base** | MLA with SwiGLU Activations |
| **Precision** | Native BF16 / INT4 Support |

</div>
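The expert-routing entry above (Top-4 gating over 64 experts) can be illustrated with a toy router. Shapes and the renormalized-softmax weighting are illustrative assumptions, not the production kernel:

```python
import math
import random


def top_k_gate(logits, k=4):
    # select the k highest-scoring experts and renormalize their softmax weights
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in idx]
    total = sum(exps)
    return {i: e / total for i, e in zip(idx, exps)}


random.seed(0)
logits = [random.gauss(0, 1) for _ in range(64)]  # one router score per expert
weights = top_k_gate(logits)
assert len(weights) == 4                          # only 4 of 64 experts fire
assert abs(sum(weights.values()) - 1.0) < 1e-9    # their weights renormalize to 1
```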

## 3. Benchmarking & Performance

Kirim-V3 sets a new precedent for coding and reasoning, outperforming traditional frontier models in backend-specific evaluations.

<div align="center">
<table>
<thead>
<tr>
<th align="center">Benchmark</th>
<th align="center"><sup>Kirim V3<br><sup>(Deep Reasoning)</sup></sup></th>
<th align="center"><sup>Claude 4.5 Sonnet<br><sup>(Extended Thinking)</sup></sup></th>
<th align="center"><sup>ChatGPT 5.1 Codex<br><sup>(Reasoning)</sup></sup></th>
<th align="center"><sup>DeepSeek V3.2<br><sup>(Thinking)</sup></sup></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" colspan=5><strong>Reasoning & Logic</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">HLE-Full (Pass@1)</td>
<td align="center" style="vertical-align: middle">33.2</td>
<td align="center" style="vertical-align: middle">34.1</td>
<td align="center" style="vertical-align: middle">31.8</td>
<td align="center" style="vertical-align: middle">25.1</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">AIME 2025 (0-Shot)</td>
<td align="center" style="vertical-align: middle">97.8</td>
<td align="center" style="vertical-align: middle">98.2</td>
<td align="center" style="vertical-align: middle">95.5</td>
<td align="center" style="vertical-align: middle">93.1</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">MMLU-Pro</td>
<td align="center" style="vertical-align: middle">89.4</td>
<td align="center" style="vertical-align: middle">90.1</td>
<td align="center" style="vertical-align: middle">87.2</td>
<td align="center" style="vertical-align: middle">85.0</td>
</tr>
<tr>
<td align="center" colspan=5><strong>Engineered Intelligence (Coding)</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">SWE-Bench Verified</td>
<td align="center" style="vertical-align: middle">83.9</td>
<td align="center" style="vertical-align: middle">84.2</td>
<td align="center" style="vertical-align: middle">81.1</td>
<td align="center" style="vertical-align: middle">73.1</td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">LiveCodeBench (v6)</td>
<td align="center" style="vertical-align: middle">90.5</td>
<td align="center" style="vertical-align: middle">91.1</td>
<td align="center" style="vertical-align: middle">88.4</td>
<td align="center" style="vertical-align: middle">83.3</td>
</tr>
<tr>
<td align="center" colspan=5><strong>Agentic Workflows</strong></td>
</tr>
<tr>
<td align="center" style="vertical-align: middle">BrowseComp (Swarm)</td>
<td align="center" style="vertical-align: middle">80.2</td>
<td align="center" style="vertical-align: middle">81.1</td>
<td align="center" style="vertical-align: middle">77.4</td>
<td align="center" style="vertical-align: middle">-</td>
</tr>
</tbody>
</table>
</div>

<details>
<summary><b>Methodology & Footnotes</b></summary>

- **Test Environment**: Evaluations performed on the Kirim Inference Cluster (8x H200 nodes).
- **Configuration**: Models tested at 256k context saturation. Kirim-V3 used "Deep Reasoning" mode (T=1.0, P=0.95).
- **Swarm Metrics**: Swarm mode involves the main agent delegating to parallel sub-routines (max 15 main steps, 100 sub-steps).

</details>
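The step caps in the swarm footnote (15 main-agent steps, 100 per sub-routine) amount to a simple per-agent budget. A toy enforcement sketch, with names invented for illustration:

```python
class StepBudget:
    def __init__(self, max_steps: int):
        self.max_steps = max_steps
        self.used = 0

    def tick(self) -> bool:
        # returns False once the agent has exhausted its step budget
        if self.used >= self.max_steps:
            return False
        self.used += 1
        return True


main_agent = StepBudget(15)   # main agent cap from the footnote
sub_agent = StepBudget(100)   # per-sub-routine cap
steps = sum(1 for _ in range(200) if main_agent.tick())
assert steps == 15  # the main agent stops at its cap
```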

## 4. Systems Architecture

Kirim-V3 implements **Interleaved Architectural Reasoning**. In "Deep Reasoning" mode, the model generates intermediate tactical thoughts (tags: `<thought>`) before executing code or tool calls, allowing complex, multi-step validation of assumptions during generation.

## 5. Deployment

Kirim-V3 is compatible with enterprise-grade inference engines:

* **vLLM** (>= 0.5.0)
* **SGLang** (Production Cluster Ready)
* **KTransformers** (Advanced MoE Optimizations)

Standard integration requires `transformers >= 4.57.1`. See the [Installation Guide](docs/deployment_vllm.md) for full implementation details.
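Separating the `<thought>` traces described above from a response is a small parsing task; the helper below is illustrative (only the tag name comes from the text):

```python
import re


def split_reasoning(text: str):
    # collect interleaved <thought>...</thought> blocks, return (thoughts, answer)
    thoughts = re.findall(r"<thought>(.*?)</thought>", text, flags=re.S)
    answer = re.sub(r"<thought>.*?</thought>", "", text, flags=re.S).strip()
    return thoughts, answer


sample = "<thought>Check JWT expiry handling first.</thought>Use short-lived tokens."
thoughts, answer = split_reasoning(sample)
assert thoughts == ["Check JWT expiry handling first."]
assert answer == "Use short-lived tokens."
```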

## 6. Usage Example

Kirim-V3 supports standard API interactions. Below is a production-style integration example.

```python
from kirim import KirimAI

# Initialize secure client
client = KirimAI(api_key="your_token", org_id="your_org")

# Design-and-build workflow
instruction = "Architect a zero-trust authentication layer in Rust with JWT and Redis."
workflow = client.execute(
    instruction,
    mode="deep_reasoning",
    max_tokens=8192,
)

# Access reasoning traces
print(f"Strategic Plan: {workflow.thought_trace}")
# Access implementation
print(f"Implementation: {workflow.artifact}")
```

## 7. Licensing

Code components and model weights are distributed under the **Apache License 2.0**.

---

## 8. Attribution

Refer to [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md) for open-source dependency information.

## 9. Citation

```bibtex
@misc{kirim2026kirimv3,
  title={Kirim-V3: Frontier Agentic Intelligence for System Engineering},
  author={Kirim AI Research},
  year={2026},
  publisher={Kirim AI}
}
```