Revised K2 Think naming

README.md (CHANGED)
```diff
@@ -8,7 +8,7 @@ license: apache-2.0
 pipeline_tag: text-generation
 ---
 
-# K2
+# K2 Think (Jan '26): A Fully-Sovereign Reasoning System
 
 📚 [Paper]() - 📝 [Code](https://github.com/LLM360/Reasoning360) - 🏢 [Project Page](https://k2think.ai)
 
```
```diff
@@ -16,7 +16,7 @@ pipeline_tag: text-generation
 
 <br>
 
-K2
+K2 Think (Jan '26) is a 70 billion parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving, built on top of [K2-V2-Instruct](https://huggingface.co/LLM360/K2-V2-Instruct) to form a fully sovereign reasoning system.
 
 # Quickstart
 
```
```diff
@@ -36,7 +36,7 @@ We use the following serving configurations:
 The provided chat template sets the reasoning effort to `high`
 
 ### Transformers
-You can use `K2
+You can use `K2 Think (Jan '26)` with Transformers. If you use `transformers.pipeline`, it will apply the chat template automatically. If you use `model.generate` directly, you need to apply the chat template manually.
 
 The chat template is directly inherited from K2-V2-Instruct, with the default `reasoning_effort` set to `"high"`. The other levels of reasoning effort (`"low"` and `"medium"`) are still available but have not been tested or evaluated. As such, the model's behavior under these settings is not guaranteed to maintain reported performance.
 
```
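For the renamed checkpoint, the manual path described in this hunk would look roughly like the sketch below. This is a minimal illustration, not taken from the README: the `AutoModelForCausalLM`/`AutoTokenizer` loading, dtype, and device arguments are assumptions, and it assumes `reasoning_effort` is forwarded to the chat template as a keyword of `apply_chat_template`.

```python
# Hypothetical sketch: manual chat-template application with model.generate,
# assuming `reasoning_effort` is accepted as a template keyword.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/K2-Think-0126"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # assumed settings
)

messages = [{"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"}]

# Extra keyword arguments to apply_chat_template are forwarded to the Jinja
# template, which is how a `reasoning_effort` knob would typically be exposed.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    reasoning_effort="high",  # README default; "low"/"medium" are untested
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```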
```diff
@@ -44,7 +44,7 @@ The chat template is directly inherited from K2-V2-Instruct, with the default `r
 from transformers import pipeline
 import torch
 
-model_id = "LLM360/K2-Think-
+model_id = "LLM360/K2-Think-0126"
 
 pipe = pipeline(
     "text-generation",
```
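The hunk above only shows the lines around the renamed `model_id`; filled out, the pipeline snippet would plausibly read as below. The `torch_dtype`, `device_map`, and generation arguments are illustrative assumptions, not visible in this diff.

```python
# Sketch of the surrounding pipeline snippet with the renamed checkpoint;
# dtype/device/generation arguments are assumptions, not part of the diff.
from transformers import pipeline
import torch

model_id = "LLM360/K2-Think-0126"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"}]
result = pipe(messages, max_new_tokens=2048)
print(result[0]["generated_text"][-1]["content"])  # assistant turn is appended last
```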
```diff
@@ -75,7 +75,7 @@ client = OpenAI(
 )
 
 completion = client.chat.completions.create(
-    model="LLM360/K2-Think-
+    model="LLM360/K2-Think-0126",
     messages = [
         {"role": "system", "content": "You are K2-Think, a helpful assistant created by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) Institute of Foundation Models (IFM)."},
         {"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"}
```
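Likewise, the OpenAI-compatible call in this hunk needs a client pointing at whatever server hosts the model; the `base_url` and `api_key` below are placeholders for a local endpoint, not values from the README.

```python
# Sketch of the full OpenAI-compatible request; base_url/api_key are
# placeholders for a locally served endpoint and are not from the diff.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local serving endpoint
    api_key="EMPTY",                      # placeholder; local servers often ignore it
)

completion = client.chat.completions.create(
    model="LLM360/K2-Think-0126",
    messages=[
        {"role": "system", "content": "You are K2-Think, a helpful assistant created by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) Institute of Foundation Models (IFM)."},
        {"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"},
    ],
)
print(completion.choices[0].message.content)
```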
```diff
@@ -93,7 +93,7 @@ A summary of evaluation results are reported in our [Blog]()
 
 ## Benchmarks (pass\@1, average over 16 runs)
 
-| Domain | Benchmark | K2
+| Domain | Benchmark | K2 Think (Jan '26) |
 | ------- | -------------------- | -----------: |
 | Math | AIME 2025 | 90.42 |
 | Math | HMMT 2025 | 84.79 |
```
```diff
@@ -105,7 +105,7 @@ A summary of evaluation results are reported in our [Blog]()
 
 <!-- ## Inference Speed
 
-We deploy K2
+We deploy K2 Think (Jan '26) on Cerebras Wafer-Scale Engine (WSE) systems, leveraging the world’s largest processor and speculative decoding to achieve unprecedented inference speeds for our 32B reasoning system.
 
 | Platform | Throughput (tokens/sec) | Example: 32k-token response (time) |
 | --------------------------------- | ----------------------: | ---------------------------------: |
```
````diff
@@ -144,12 +144,12 @@ We have employed various techniques to reduce bias, harmful outputs, and other r
 ---
 
 # Citation
-If you use K2
+If you use K2 Think (Jan '26) in your research, please use the following citation:
 
 ```bibtex
-@misc{
-title={K2
-author={K2
+@misc{k2thinkteam2026k2think0126,
+      title={K2 {T}hink ({Jan} '26): A Fully-Sovereign Reasoning System},
+      author={K2 Think Team and Taylor W. Killian and Varad Pimpalkhute and Richard Fan and Haonan Li and Chengqian Gao and Ming Shan Hee and Xudong Han and John Maggs and Guowei He and Zhengzhong Liu and Eric P. Xing},
       year={2026},
       url={https://tbd.org},
 }
````