---
tags:
- compressed-tensors
license: other
license_name: modified-mit
library_name: transformers
pipeline_tag: image-text-to-text
paper: arxiv.org/abs/2602.02276
---
<div align="center">
  <picture>
      <img src="figures/kimi-logo.png" width="30%" alt="Kimi K2.5">
  </picture>
</div>
<hr>
<div align="center" style="line-height:1">
  <a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2.5-ff6b6b?color=1783ff&logoColor=white"/></a>
  <a href="https://github.com/moonshotai/Kimi-K2.5"><img alt="github" src="https://img.shields.io/badge/Github-Kimi%20K2.5-181717?logo=github&color=1783ff&logoColor=white"/></a>
  <a href="https://www.moonshot.ai" target="_blank"><img alt="Homepage" src="https://img.shields.io/badge/Homepage-Moonshot%20AI-white?logo=Kimi&logoColor=white"/></a>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/moonshotai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Moonshot%20AI-ffc107?color=ffc107&logoColor=white"/></a>
  <a href="https://twitter.com/kimi_moonshot" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-Kimi.ai-white?logo=x&logoColor=white"/></a>
  <a href="https://discord.gg/TYU2fdJykW" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-Kimi.ai-white?logo=discord&logoColor=white"/></a>
</div>
<div align="center" style="line-height: 1;">
  <a href="https://huggingface.co/moonshotai/Kimi-K2.5/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
</div>
<p align="center">
  <b>📰&nbsp;&nbsp;<a href="https://www.kimi.com/blog/kimi-k2-5.html">Tech Blog</a></b> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <b>📄&nbsp;&nbsp;<a href="https://arxiv.org/abs/2602.02276">Paper</a></b>
</p>

## 0. Changelog

- 2026.1.29:
  - Removed the default system prompt, which could confuse users and cause unexpected behaviour.
  - Replaced the incorrect token `<|media_start|>` with `<|media_begin|>` in the chat template.

## 1. Model Introduction

Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens on top of Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes as well as conversational and agentic paradigms.

### Key Features
- **Native Multimodality**: Pre-trained on vision–language tokens, K2.5 excels at visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs.
- **Coding with Vision**: K2.5 generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing.
- **Agent Swarm**: K2.5 moves from single-agent scaling to a self-directed, coordinated swarm-like execution scheme, decomposing complex tasks into parallel sub-tasks executed by dynamically instantiated, domain-specific agents.

## 2. Model Summary

<div align="center">

| | |
|:---:|:---:|
| **Architecture** | Mixture-of-Experts (MoE) |
| **Total Parameters** | 1T |
| **Activated Parameters** | 32B |
| **Number of Layers** (Dense layer included) | 61 |
| **Number of Dense Layers** | 1 |
| **Attention Hidden Dimension** | 7168 |
| **MoE Hidden Dimension** (per Expert) | 2048 |
| **Number of Attention Heads** | 64 |
| **Number of Experts** | 384 |
| **Selected Experts per Token** | 8 |
| **Number of Shared Experts** | 1 |
| **Vocabulary Size** | 160K |
| **Context Length** | 256K |
| **Attention Mechanism** | MLA |
| **Activation Function** | SwiGLU |
| **Vision Encoder** | MoonViT |
| **Parameters of Vision Encoder** | 400M |

</div>
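As a rough cross-check of the summary table, the expert counts alone largely explain the gap between 32B activated and 1T total parameters. The sketch below is our own back-of-the-envelope arithmetic; the per-expert matrix shapes are inferred from the listed hyperparameters, not official figures:

```python
# Back-of-the-envelope: fraction of MoE expert parameters active per token.
# Assumes SwiGLU experts with three projection matrices of shape
# (hidden 7168 x moe_hidden 2048) -- inferred from the table, not official.
hidden, moe_hidden = 7168, 2048
experts, selected, shared = 384, 8, 1
moe_layers = 61 - 1  # all layers except the single dense layer

params_per_expert = 3 * hidden * moe_hidden          # gate, up, down projections
total_expert_params = moe_layers * experts * params_per_expert
active_expert_params = moe_layers * (selected + shared) * params_per_expert

print(f"total expert params:  ~{total_expert_params / 1e12:.2f}T")
print(f"active expert params: ~{active_expert_params / 1e9:.1f}B")
print(f"active fraction of experts: {(selected + shared) / experts:.1%}")
```

The ~1.01T of expert weights matches the 1T total, and ~24B active expert weights plus the always-on attention and dense parameters is consistent with the 32B activated figure.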

## 3. Evaluation Results

<div align="center">

| Benchmark | Kimi K2.5<br><sup>(Thinking)</sup> | GPT-5.2<br><sup>(xhigh)</sup> | Claude 4.5 Opus<br><sup>(Extended Thinking)</sup> | Gemini 3 Pro<br><sup>(High Thinking Level)</sup> | DeepSeek V3.2<br><sup>(Thinking)</sup> | Qwen3-VL-235B-A22B-Thinking |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| **Reasoning & Knowledge** | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1<sup>†</sup> | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8<sup>†</sup> | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| **Image & Video** | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| [WorldVQA](https://github.com/MoonshotAI/WorldVQA) | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 86.0* | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5* | 67.2* | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| **Coding** | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| **Long Context** | | | | | | |
| LongBench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| **Agentic Search** | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ ctx manage) | 74.9 | 65.8 | 57.8 | 59.2 | 67.6 | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch (item-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (item-f1, Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchComp T2&T3 | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |

</div>

<details>
<summary><b>Footnotes</b></summary>

1. General Testing Details
   - We report results for Kimi K2.5 and DeepSeek-V3.2 with thinking mode enabled, Claude Opus 4.5 with extended thinking mode, GPT-5.2 with xhigh reasoning effort, and Gemini 3 Pro with a high thinking level. For vision benchmarks, we additionally report results for Qwen3-VL-235B-A22B-Thinking.
   - Unless otherwise specified, all Kimi K2.5 experiments were conducted with temperature = 1.0, top-p = 0.95, and a context length of 256k tokens.
   - Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.5 and are marked with an asterisk (*).
   - We could not evaluate GPT-5.2 xhigh on all benchmarks due to service stability issues; benchmarks that were not tested are marked with "-".
2. Text and Reasoning
   - HLE, AIME 2025, HMMT 2025 (Feb), and GPQA-Diamond were evaluated with a maximum completion budget of 96k tokens.
   - Results for AIME and HMMT are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).
   - For HLE, we report scores on the full set (text & image). Kimi K2.5 scores 31.5 (text) and 21.3 (image) without tools, and 51.8 (text) and 39.8 (image) with tools. The DeepSeek-V3.2 score corresponds to its text-only subset (marked with †). Hugging Face access was blocked to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the latest round of tool messages is retained.
3. Tool-Augmented / Agentic Search
   - Kimi K2.5 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools and all agentic search benchmarks.
   - Except for BrowseComp (where K2.5 and DeepSeek-V3.2 used the discard-all strategy), no context management was applied, and tasks exceeding the supported context length were counted as failed.
   - The test system prompts emphasize deep and proactive tool use, instructing models to reason carefully, leverage tools, and verify uncertain information. Full prompts will be provided in the technical report.
   - Results for Seal-0 and WideSearch are averaged over four runs (avg@4).
4. Vision Benchmarks
   - Max-tokens = 64k, averaged over three runs (avg@3).
   - ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning.
   - MMMU-Pro follows the official protocol, preserving input order and prepending images.
   - GPT-5.2 xhigh had a ~10% failure rate (no output despite 3 retries); failures were treated as incorrect, so its reported scores likely underestimate true performance.
   - WorldVQA is a benchmark designed to evaluate atomic vision-centric world knowledge; it is available at https://github.com/MoonshotAI/WorldVQA.
   - The OmniDocBench score is computed as (1 − normalized Levenshtein distance) × 100, so a higher score denotes better accuracy.
5. Coding Tasks
   - Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser. We evaluated Terminal-Bench 2.0 under non-thinking mode because our current context management strategy for thinking mode is incompatible with Terminus-2.
   - For the SWE-Bench series of evaluations (Verified, Multilingual, and Pro), we used an internally developed evaluation framework with a minimal set of tools (bash, createfile, insert, view, strreplace, and submit) and system prompts tailored to the tasks. The highest scores were achieved under non-thinking mode.
   - The score of Claude Opus 4.5 on CyberGym is reported under the non-thinking setting.
   - All reported coding scores are averaged over 5 independent runs.
6. Long-Context Benchmarks
   - AA-LCR: scores averaged over three runs (avg@3).
   - LongBench-V2: identical prompts, with input contexts standardized to ~128k tokens.
7. Agent Swarm
   - BrowseComp (Swarm Mode): main agent max 15 steps; sub-agents max 100 steps.
   - WideSearch (Swarm Mode): main and sub-agents max 100 steps.

</details>
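The OmniDocBench scoring rule in footnote 4 can be made concrete with a small sketch. This is our own illustration of the stated formula; the `levenshtein` helper and the normalization by the longer string's length are assumptions, and the official harness may normalize differently:

```python
# Illustration of: score = (1 - normalized Levenshtein distance) * 100.
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def omnidoc_score(pred: str, ref: str) -> float:
    # Assumption: normalize by the longer of the two strings.
    dist = levenshtein(pred, ref)
    return (1 - dist / max(len(pred), len(ref), 1)) * 100

print(f"{omnidoc_score('kitten', 'sitting'):.1f}")  # 3 edits over length 7
```

A perfect transcription scores 100; each edit relative to the reference lowers the score proportionally.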

## 4. Native INT4 Quantization

Kimi-K2.5 adopts the same native INT4 quantization method as [Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking#4-native-int4-quantization).
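To illustrate why native INT4 matters at a 1T-parameter scale, here is a rough back-of-the-envelope sketch (our own arithmetic; it ignores quantization scales, any tensors kept at higher precision, and activation/KV-cache memory):

```python
# Approximate weight-only memory for 1T parameters at different precisions.
params = 1e12

def weight_gb(bits_per_param: float) -> float:
    # bits -> bytes -> gigabytes
    return params * bits_per_param / 8 / 1e9

for name, bits in [("BF16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name:>4}: ~{weight_gb(bits):,.0f} GB")
```

INT4 weights occupy roughly a quarter of the BF16 footprint, which is what makes serving a 1T-parameter model on a single node plausible.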

## 5. Deployment

> [!Note]
> You can access Kimi-K2.5's API at https://platform.moonshot.ai, where we provide an OpenAI/Anthropic-compatible API. To verify that a deployment is correct, we also provide the [Kimi Vendor Verifier](https://kimi.com/blog/kimi-vendor-verifier.html).

Kimi-K2.5 is currently recommended to run on the following inference engines:

* vLLM
* SGLang
* KTransformers

The minimum version requirement for `transformers` is `4.57.1`.

Deployment examples can be found in the [Model Deployment Guide](docs/deploy_guidance.md).

---
## 6. Model Usage

The usage demos below demonstrate how to call our official API.

For third-party APIs deployed with vLLM or SGLang, please note that:

> [!Note]
> - Chat with video content is an experimental feature and is currently supported only in our official API.
>
> - The recommended `temperature` is `1.0` for Thinking mode and `0.6` for Instant mode.
>
> - The recommended `top_p` is `0.95`.
>
> - To use Instant mode, pass `{'chat_template_kwargs': {"thinking": False}}` in `extra_body`.

- ### Chat Completion
 
 
567
 
568
- This is a simple chat completion script which shows how to call K2.5 API in Thinking and Instant modes.
569
 
570
- ```python
- import openai
-
- def simple_chat(client: openai.OpenAI, model_name: str):
-     messages = [
-         {'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
-         {
-             'role': 'user',
-             'content': [
-                 {'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
-             ],
-         },
-     ]
-     response = client.chat.completions.create(
-         model=model_name, messages=messages, stream=False, max_tokens=4096
-     )
-     print('====== Below is reasoning_content in Thinking Mode ======')
-     print(f'reasoning content: {response.choices[0].message.reasoning_content}')
-     print('====== Below is response in Thinking Mode ======')
-     print(f'response: {response.choices[0].message.content}')
-
-     # To use Instant mode, pass {'thinking': {'type': 'disabled'}}
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         stream=False,
-         max_tokens=4096,
-         extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
-         # extra_body={'chat_template_kwargs': {'thinking': False}}  # this is for vLLM/SGLang
-     )
-     print('====== Below is response in Instant Mode ======')
-     print(f'response: {response.choices[0].message.content}')
- ```

- ### Chat Completion with Visual Content

- K2.5 supports image and video input.

- The following example demonstrates how to call the K2.5 API with image input:

- ```python
- import openai
- import base64
- import requests
-
- def chat_with_image(client: openai.OpenAI, model_name: str):
-     url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/kimi-logo.png'
-     image_base64 = base64.b64encode(requests.get(url).content).decode()
-     messages = [
-         {
-             'role': 'user',
-             'content': [
-                 {'type': 'text', 'text': 'Describe this image in detail.'},
-                 {
-                     'type': 'image_url',
-                     'image_url': {'url': f'data:image/png;base64,{image_base64}'},
-                 },
-             ],
-         }
-     ]
-
-     response = client.chat.completions.create(
-         model=model_name, messages=messages, stream=False, max_tokens=8192
-     )
-     print('====== Below is reasoning_content in Thinking Mode ======')
-     print(f'reasoning content: {response.choices[0].message.reasoning_content}')
-     print('====== Below is response in Thinking Mode ======')
-     print(f'response: {response.choices[0].message.content}')
-
-     # Instant mode is also supported if you pass {'thinking': {'type': 'disabled'}}
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         stream=False,
-         max_tokens=4096,
-         extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
-         # extra_body={'chat_template_kwargs': {'thinking': False}}  # this is for vLLM/SGLang
-     )
-     print('====== Below is response in Instant Mode ======')
-     print(f'response: {response.choices[0].message.content}')
-
-     return response.choices[0].message.content
- ```

- The following example demonstrates how to call the K2.5 API with video input:

 ```python
- import openai
- import base64
- import requests
-
- def chat_with_video(client: openai.OpenAI, model_name: str):
-     url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/demo_video.mp4'
-     video_base64 = base64.b64encode(requests.get(url).content).decode()
-     messages = [
-         {
-             "role": "user",
-             "content": [
-                 {"type": "text", "text": "Describe the video in detail."},
-                 {
-                     "type": "video_url",
-                     "video_url": {"url": f"data:video/mp4;base64,{video_base64}"},
-                 },
-             ],
-         }
-     ]
-
-     response = client.chat.completions.create(model=model_name, messages=messages)
-     print('====== Below is reasoning_content in Thinking Mode ======')
-     print(f'reasoning content: {response.choices[0].message.reasoning_content}')
-     print('====== Below is response in Thinking Mode ======')
-     print(f'response: {response.choices[0].message.content}')
-
-     # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         stream=False,
-         max_tokens=4096,
-         extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
-         # extra_body={'chat_template_kwargs': {'thinking': False}}  # this is for vLLM/SGLang
-     )
-     print('====== Below is response in Instant Mode ======')
-     print(f'response: {response.choices[0].message.content}')
-     return response.choices[0].message.content
 ```

- ### Interleaved Thinking and Multi-Step Tool Call
-
- K2.5 uses the same Interleaved Thinking and Multi-Step Tool Call design as K2 Thinking. For a usage example, please refer to the [K2 Thinking documentation](https://platform.moonshot.ai/docs/guide/use-kimi-k2-thinking-model#complete-example).
-

- ### Coding Agent Framework
-
- Kimi K2.5 works best with Kimi Code CLI as its agent framework; give it a try at https://www.kimi.com/code.
-
-
- ---
-
- ## 7. License
-
- Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
-
 ---

- ## 8. Third Party Notices
-
- See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md).
-
- ---
-
- ## 9. Contact Us
-
- If you have any questions, please reach out at [support@moonshot.cn](mailto:support@moonshot.cn).

- ## 10. Reference

- If you find K2.5 useful for your research, please cite the K2.5 technical report:

  ```bibtex
- @misc{kimiteam2026kimik25visualagentic,
- title={Kimi K2.5: Visual Agentic Intelligence},
- author={Kimi Team and Tongtong Bai and Yifan Bai and Yiping Bao and S. H. Cai and Yuan Cao and Y. Charles and H. S. Che and Cheng Chen and Guanduo Chen and Huarong Chen and Jia Chen and Jiahao Chen and Jianlong Chen and Jun Chen and Kefan Chen and Liang Chen and Ruijue Chen and Xinhao Chen and Yanru Chen and Yanxu Chen and Yicun Chen and Yimin Chen and Yingjiang Chen and Yuankun Chen and Yujie Chen and Yutian Chen and Zhirong Chen and Ziwei Chen and Dazhi Cheng and Minghan Chu and Jialei Cui and Jiaqi Deng and Muxi Diao and Hao Ding and Mengfan Dong and Mengnan Dong and Yuxin Dong and Yuhao Dong and Angang Du and Chenzhuang Du and Dikang Du and Lingxiao Du and Yulun Du and Yu Fan and Shengjun Fang and Qiulin Feng and Yichen Feng and Garimugai Fu and Kelin Fu and Hongcheng Gao and Tong Gao and Yuyao Ge and Shangyi Geng and Chengyang Gong and Xiaochen Gong and Zhuoma Gongque and Qizheng Gu and Xinran Gu and Yicheng Gu and Longyu Guan and Yuanying Guo and Xiaoru Hao and Weiran He and Wenyang He and Yunjia He and Chao Hong and Hao Hu and Jiaxi Hu and Yangyang Hu and Zhenxing Hu and Ke Huang and Ruiyuan Huang and Weixiao Huang and Zhiqi Huang and Tao Jiang and Zhejun Jiang and Xinyi Jin and Yu Jing and Guokun Lai and Aidi Li and C. 
Li and Cheng Li and Fang Li and Guanghe Li and Guanyu Li and Haitao Li and Haoyang Li and Jia Li and Jingwei Li and Junxiong Li and Lincan Li and Mo Li and Weihong Li and Wentao Li and Xinhang Li and Xinhao Li and Yang Li and Yanhao Li and Yiwei Li and Yuxiao Li and Zhaowei Li and Zheming Li and Weilong Liao and Jiawei Lin and Xiaohan Lin and Zhishan Lin and Zichao Lin and Cheng Liu and Chenyu Liu and Hongzhang Liu and Liang Liu and Shaowei Liu and Shudong Liu and Shuran Liu and Tianwei Liu and Tianyu Liu and Weizhou Liu and Xiangyan Liu and Yangyang Liu and Yanming Liu and Yibo Liu and Yuanxin Liu and Yue Liu and Zhengying Liu and Zhongnuo Liu and Enzhe Lu and Haoyu Lu and Zhiyuan Lu and Junyu Luo and Tongxu Luo and Yashuo Luo and Long Ma and Yingwei Ma and Shaoguang Mao and Yuan Mei and Xin Men and Fanqing Meng and Zhiyong Meng and Yibo Miao and Minqing Ni and Kun Ouyang and Siyuan Pan and Bo Pang and Yuchao Qian and Ruoyu Qin and Zeyu Qin and Jiezhong Qiu and Bowen Qu and Zeyu Shang and Youbo Shao and Tianxiao Shen and Zhennan Shen and Juanfeng Shi and Lidong Shi and Shengyuan Shi and Feifan Song and Pengwei Song and Tianhui Song and Xiaoxi Song and Hongjin Su and Jianlin Su and Zhaochen Su and Lin Sui and Jinsong Sun and Junyao Sun and Tongyu Sun and Flood Sung and Yunpeng Tai and Chuning Tang and Heyi Tang and Xiaojuan Tang and Zhengyang Tang and Jiawen Tao and Shiyuan Teng and Chaoran Tian and Pengfei Tian and Ao Wang and Bowen Wang and Chensi Wang and Chuang Wang and Congcong Wang and Dingkun Wang and Dinglu Wang and Dongliang Wang and Feng Wang and Hailong Wang and Haiming Wang and Hengzhi Wang and Huaqing Wang and Hui Wang and Jiahao Wang and Jinhong Wang and Jiuzheng Wang and Kaixin Wang and Linian Wang and Qibin Wang and Shengjie Wang and Shuyi Wang and Si Wang and Wei Wang and Xiaochen Wang and Xinyuan Wang and Yao Wang and Yejie Wang and Yipu Wang and Yiqin Wang and Yucheng Wang and Yuzhi Wang and Zhaoji Wang and Zhaowei Wang and Zhengtao Wang and 
Zhexu Wang and Zihan Wang and Zizhe Wang and Chu Wei and Ming Wei and Chuan Wen and Zichen Wen and Chengjie Wu and Haoning Wu and Junyan Wu and Rucong Wu and Wenhao Wu and Yuefeng Wu and Yuhao Wu and Yuxin Wu and Zijian Wu and Chenjun Xiao and Jin Xie and Xiaotong Xie and Yuchong Xie and Yifei Xin and Bowei Xing and Boyu Xu and Jianfan Xu and Jing Xu and Jinjing Xu and L. H. Xu and Lin Xu and Suting Xu and Weixin Xu and Xinbo Xu and Xinran Xu and Yangchuan Xu and Yichang Xu and Yuemeng Xu and Zelai Xu and Ziyao Xu and Junjie Yan and Yuzi Yan and Guangyao Yang and Hao Yang and Junwei Yang and Kai Yang and Ningyuan Yang and Ruihan Yang and Xiaofei Yang and Xinlong Yang and Ying Yang and Yi Yang and Yi Yang and Zhen Yang and Zhilin Yang and Zonghan Yang and Haotian Yao and Dan Ye and Wenjie Ye and Zhuorui Ye and Bohong Yin and Chengzhen Yu and Longhui Yu and Tao Yu and Tianxiang Yu and Enming Yuan and Mengjie Yuan and Xiaokun Yuan and Yang Yue and Weihao Zeng and Dunyuan Zha and Haobing Zhan and Dehao Zhang and Hao Zhang and Jin Zhang and Puqi Zhang and Qiao Zhang and Rui Zhang and Xiaobin Zhang and Y. Zhang and Yadong Zhang and Yangkun Zhang and Yichi Zhang and Yizhi Zhang and Yongting Zhang and Yu Zhang and Yushun Zhang and Yutao Zhang and Yutong Zhang and Zheng Zhang and Chenguang Zhao and Feifan Zhao and Jinxiang Zhao and Shuai Zhao and Xiangyu Zhao and Yikai Zhao and Zijia Zhao and Huabin Zheng and Ruihan Zheng and Shaojie Zheng and Tengyang Zheng and Junfeng Zhong and Longguang Zhong and Weiming Zhong and M. Zhou and Runjie Zhou and Xinyu Zhou and Zaida Zhou and Jinguo Zhu and Liya Zhu and Xinhao Zhu and Yuxuan Zhu and Zhen Zhu and Jingze Zhuang and Weiyu Zhuang and Ying Zou and Xinxing Zu},
  year={2026},
- eprint={2602.02276},
- archivePrefix={arXiv},
- primaryClass={cs.CL},
- url={https://arxiv.org/abs/2602.02276},
 }
  ```
 
 ---
 tags:
+ - moe
+ - agentic
+ - backend-engineering
+ license: apache-2.0
 library_name: transformers
+ pipeline_tag: text-generation
 paper: arxiv.org/abs/2602.02276
 ---
 <div align="center">
 <picture>
+ <img src="figures/kirim-v3-logo.png" width="30%" alt="Kirim V3">
 </picture>
 </div>
 <hr>
 <div align="center" style="line-height:1">
+ <a href="https://www.kirim-ai.kom" target="_blank"><img alt="Launch" src="https://img.shields.io/badge/⚡%20Launch-Kirim%20V3-1783ff?logoColor=white"/></a>
+ <a href="https://github.com/kirim-ai/Kirim-V3"><img alt="Source" src="https://img.shields.io/badge/Source-Kirim%20V3-181717?logo=github&logoColor=white"/></a>
+ <a href="https://www.kirim-ai.kom" target="_blank"><img alt="Enterprise" src="https://img.shields.io/badge/Enterprise-Kirim%20AI-000000?logo=blueprint&logoColor=white"/></a>
 </div>

 <div align="center" style="line-height: 1;">
+ <a href="https://huggingface.co/kirim-ai" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Artifacts-Kirim%20AI-ffc107?logoColor=white"/></a>
+ <a href="https://twitter.com/kirim_ai" target="_blank"><img alt="X" src="https://img.shields.io/badge/Social-Kirim.ai-000000?logo=x&logoColor=white"/></a>
+ <a href="https://discord.gg/kirim-ai" target="_blank"><img alt="Community" src="https://img.shields.io/badge/Community-Discord-5865F2?logo=discord&logoColor=white"/></a>
 </div>
 <div align="center" style="line-height: 1;">
+ <a href="LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Apache_2.0-3b8739?logo=apache&logoColor=white"/></a>
 </div>
 <p align="center">
+ <b>📰&nbsp;&nbsp;<a href="https://www.kirim-ai.kom/blog/kirim-v3.html">Engineering Blog</a></b> &nbsp;&nbsp;&nbsp; | &nbsp;&nbsp;&nbsp; <b>📄&nbsp;&nbsp;<a href="https://arxiv.org/abs/2602.02276">Technical Paper</a></b>
 </p>
 

+ ## 0. Version History
+ - **2026.02.12 (Current)**:
+   - Deployment of Kirim-V3 (102B base).
+   - Introduction of Native Agentic Reasoning pathways.
+   - Integration of native 4-bit (INT4) quantization support for scalable enterprise serving.

+ ## 1. Introduction

+ Kirim-V3 is a frontier-scale, native agentic intelligence system. Trained on a foundational pool of 15 trillion specialized tokens, it is built on a sparse **Mixture-of-Experts (MoE)** architecture designed specifically for professional backend engineering and autonomous system orchestration.

+ Kirim-V3 bridges the gap between traditional conversational models and actionable, state-aware agent swarms, providing a unified interface for both instant logical execution and deep architectural reasoning.

+ ### Core Capabilities
+ - **Reasoning-First Architecture**: Unlike general-purpose LLMs, Kirim-V3 is pre-trained on deep-reasoning traces, enabling it to solve high-entropy system-design challenges and complex logic puzzles.
+ - **Backend Optimization**: Specialized pathways for high-performance languages (Rust, Go, C++) and modern cloud infrastructure (Kubernetes, AWS/GCP architecture).
+ - **Autonomous Multi-Agent Coordination**: Native support for task decomposition, allowing a single Kirim-V3 instance to spin up, monitor, and merge results from specialized sub-agent routines.
+
+ ## 2. Technical Profile
 
 <div align="center">

+ | Metric | Detail |
+ | :--- | :--- |
+ | **Model Type** | High-Efficacy Sparse Mixture-of-Experts |
+ | **Total Parameters** | 102B |
+ | **Activated Parameters** | 28B |
+ | **Depth** | 80 Layers |
+ | **Expert Routing** | Top-4 Gating |
+ | **Total Experts** | 64 per MoE Layer |
+ | **Context Window** | 256K Tokens |
+ | **Architecture Base** | MLA with SwiGLU Activations |
+ | **Precision** | Native BF16 / INT4 Support |

 </div>
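The Top-4 gating row above can be pictured with a minimal router sketch. This is illustrative only: it assumes a plain softmax-over-top-k gate, one common MoE convention, and says nothing about Kirim-V3's actual router.

```python
import numpy as np

def top_k_gate(logits, k=4):
    """Keep the k largest router logits, softmax-normalize them,
    and zero out every other expert's weight."""
    idx = np.argsort(logits)[-k:]                      # indices of the top-k experts
    weights = np.exp(logits[idx] - logits[idx].max())  # numerically stable softmax
    weights /= weights.sum()
    gate = np.zeros_like(logits)
    gate[idx] = weights
    return gate

rng = np.random.default_rng(0)
logits = rng.normal(size=64)   # router scores for 64 experts
gate = top_k_gate(logits, k=4)
print(int((gate > 0).sum()))   # -> 4 experts active for this token
```

Because only the selected experts run their feed-forward blocks, just a fraction of the total parameters (28B of 102B, per the table) is activated per token.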

+ ## 3. Benchmarking & Performance

+ Kirim-V3 sets a new precedent for coding and reasoning, performing competitively with frontier models on backend-specific evaluations.

 <div align="center">
 <table>
 <thead>
 <tr>
 <th align="center">Benchmark</th>
+ <th align="center"><sup>Kirim V3<br><sup>(Deep Reasoning)</sup></sup></th>
+ <th align="center"><sup>Claude 4.5 Sonnet<br><sup>(Extended Thinking)</sup></sup></th>
+ <th align="center"><sup>ChatGPT 5.1 Codex<br><sup>(Reasoning)</sup></sup></th>
 <th align="center"><sup>DeepSeek V3.2 <br><sup>(Thinking)</sup></sup></th>
 </tr>
 </thead>
 <tbody>
 <tr>
+ <td align="center" colspan=5><strong>Reasoning &amp; Logic</strong></td>
 </tr>
 <tr>
+ <td align="center" style="vertical-align: middle">HLE-Full (Pass@1)</td>
+ <td align="center" style="vertical-align: middle">33.2</td>
+ <td align="center" style="vertical-align: middle">34.1</td>
+ <td align="center" style="vertical-align: middle">31.8</td>
+ <td align="center" style="vertical-align: middle">25.1</td>
 </tr>
 <tr>
+ <td align="center" style="vertical-align: middle">AIME 2025 (0-Shot)</td>
+ <td align="center" style="vertical-align: middle">97.8</td>
+ <td align="center" style="vertical-align: middle">98.2</td>
+ <td align="center" style="vertical-align: middle">95.5</td>
 <td align="center" style="vertical-align: middle">93.1</td>
 </tr>
 <tr>
 <td align="center" style="vertical-align: middle">MMLU-Pro</td>
+ <td align="center" style="vertical-align: middle">89.4</td>
 <td align="center" style="vertical-align: middle">90.1</td>
+ <td align="center" style="vertical-align: middle">87.2</td>
 <td align="center" style="vertical-align: middle">85.0</td>
 </tr>
 <tr>
+ <td align="center" colspan=5><strong>Engineered Intelligence (Coding)</strong></td>
 </tr>
 <tr>
 <td align="center" style="vertical-align: middle">SWE-Bench Verified</td>
+ <td align="center" style="vertical-align: middle">83.9</td>
+ <td align="center" style="vertical-align: middle">84.2</td>
+ <td align="center" style="vertical-align: middle">81.1</td>
 <td align="center" style="vertical-align: middle">73.1</td>
 </tr>
 <tr>
 <td align="center" style="vertical-align: middle">LiveCodeBench (v6)</td>
+ <td align="center" style="vertical-align: middle">90.5</td>
+ <td align="center" style="vertical-align: middle">91.1</td>
+ <td align="center" style="vertical-align: middle">88.4</td>
 <td align="center" style="vertical-align: middle">83.3</td>
 </tr>
 <tr>
+ <td align="center" colspan=5><strong>Agentic Workflows</strong></td>
 </tr>
 <tr>
+ <td align="center" style="vertical-align: middle">BrowseComp (Swarm)</td>
+ <td align="center" style="vertical-align: middle">80.2</td>
+ <td align="center" style="vertical-align: middle">81.1</td>
+ <td align="center" style="vertical-align: middle">77.4</td>
 <td align="center" style="vertical-align: middle">-</td>
 </tr>
 </tbody>
 </table>
 </div>

 <details>
+ <summary><b>Methodology & Footnotes</b></summary>

+ - **Test Environment**: Evaluations performed on the Kirim Inference Cluster (8x H200 nodes).
+ - **Configuration**: Models tested with 256k context saturation. Kirim-V3 used "Deep Reasoning" mode (T=1.0, P=0.95).
+ - **Swarm Metrics**: Swarm mode involves the main agent delegating to parallel sub-routines (max 15 main steps, 100 sub-steps).

+ </details>
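The step budgets quoted above (max 15 main-agent steps, 100 sub-agent steps) amount to a hard cap on the agent loop. A minimal sketch, with a toy `policy` callable of our own invention:

```python
def run_agent(policy, max_steps):
    """Run an agent loop under a hard step budget: stop when the policy
    signals completion or the budget is exhausted, whichever comes first."""
    history = []
    for _ in range(max_steps):
        action = policy(history)
        history.append(action)
        if action == "done":
            break
    return history

# Toy policy: search three times, then finish.
trace = run_agent(lambda h: "done" if len(h) >= 3 else "search", max_steps=15)
print(trace)  # ['search', 'search', 'search', 'done']

# A policy that never finishes is cut off at the budget.
capped = run_agent(lambda h: "search", max_steps=15)
print(len(capped))  # 15
```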

+ ## 4. Systems Architecture

+ Kirim-V3 implements **Interleaved Architectural Reasoning**. In "Deep Reasoning" mode, the model generates intermediate tactical thoughts (tagged `<thought>`) before executing code or tool calls. This allows complex, multi-step validation of assumptions during generation.
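Given the `<thought>` tag format described above, downstream code can split reasoning traces from the final answer. The helper below is our own illustration, not part of any official SDK:

```python
import re

def split_thoughts(text):
    """Separate <thought>...</thought> traces from the remaining answer text."""
    thoughts = re.findall(r"<thought>(.*?)</thought>", text, flags=re.DOTALL)
    answer = re.sub(r"<thought>.*?</thought>", "", text, flags=re.DOTALL).strip()
    return thoughts, answer

raw = "<thought>Check JWT expiry first.</thought>Use RS256 with rotating keys."
thoughts, answer = split_thoughts(raw)
print(thoughts)  # ['Check JWT expiry first.']
print(answer)    # Use RS256 with rotating keys.
```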
 
+ ## 5. Deployment

+ Kirim-V3 is compatible with enterprise-grade inference engines:
+ * **vLLM** (>= 0.5.0)
+ * **SGLang** (Production Cluster Ready)
+ * **KTransformers** (Advanced MoE Optimizations)

+ Standard integration requires `transformers >= 4.57.1`. See the [Installation Guide](docs/deployment_vllm.md) for full implementation details.
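For self-hosted engines exposing an OpenAI-compatible endpoint, a request payload can be sketched as follows. Everything here is an assumption for illustration: the checkpoint id is a placeholder, the sampling values mirror the benchmark footnotes (T=1.0, top_p=0.95), and the `chat_template_kwargs` toggle is a common engine-side convention rather than a documented Kirim setting.

```python
def build_chat_request(prompt: str, deep_reasoning: bool = True) -> dict:
    """Assemble a chat-completions payload for an OpenAI-compatible server."""
    return {
        "model": "kirim-ai/Kirim-V3",  # hypothetical checkpoint id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1.0 if deep_reasoning else 0.6,  # 0.6 assumed for instant-style use
        "top_p": 0.95,
        # Engine-side toggle for reasoning traces (assumed naming):
        "chat_template_kwargs": {"thinking": deep_reasoning},
    }

req = build_chat_request("Design a token-bucket rate limiter in Go.")
print(req["temperature"], req["chat_template_kwargs"])  # 1.0 {'thinking': True}
```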
 
+ ## 6. Usage Example

+ Kirim-V3 supports standard API interactions. Below is a production-style integration example.

  ```python
+ from kirim import KirimAI
+
+ # Initialize secure client
+ client = KirimAI(api_key="your_token", org_id="your_org")
+
+ # Design & build workflow
+ instruction = "Architect a zero-trust authentication layer in Rust with JWT and Redis."
+ workflow = client.execute(
+     instruction,
+     mode="deep_reasoning",
+     max_tokens=8192,
+ )
+
+ # Access reasoning traces
+ print(f"Strategic Plan: {workflow.thought_trace}")
+ # Access implementation
+ print(f"Implementation: {workflow.artifact}")
  ```

+ ## 7. Licensing

+ Code components and model weights are distributed under the **Apache License 2.0**.

 ---

+ ## 8. Attribution

+ Refer to [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md) for open-source dependency information.

+ ## 9. Citation

  ```bibtex
+ @misc{kirim2026kirimv3,
+   title={Kirim-V3: Frontier Agentic Intelligence for System Engineering},
+   author={Kirim AI Research},
    year={2026},
+   publisher={Kirim AI}
 }
  ```