Adding chat_template information

Added text to provide information about how we set up the `reasoning_effort` parameter from K2-V2.
README.md
CHANGED
````diff
@@ -1,6 +1,6 @@
 ---
 base_model:
-- LLM360/K2-V2
+- LLM360/K2-V2-Instruct
 language:
 - en
 library_name: transformers
@@ -16,13 +16,15 @@ pipeline_tag: text-generation
 
 <br>
 
-K2-Think (70B) is a 70 billion parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving, built on top of [K2-V2](https://huggingface.co/LLM360/K2-V2) and comprising a fully sovereign reasoning system.
+K2-Think (70B) is a 70 billion parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving, built on top of [K2-V2-Instruct](https://huggingface.co/LLM360/K2-V2-Instruct) and comprising a fully sovereign reasoning system.
 
 # Quickstart
 
 ### Transformers
 You can use `K2-Think (70B)` with Transformers. If you use `transformers.pipeline`, it will apply the chat template automatically. If you use `model.generate` directly, you need to apply the chat template manually.
 
+The chat template is inherited directly from K2-V2-Instruct, with the default `reasoning_effort` set to `"high"`. The other reasoning-effort levels (`"low"` and `"medium"`) are still available but have not been tested or evaluated, so the model is not guaranteed to maintain the reported performance under those settings.
+
 ```python
 from transformers import pipeline
 import torch
@@ -47,6 +49,28 @@ outputs = pipe(
 print(outputs[0]["generated_text"][-1])
 ```
 
+If you cannot use `tokenizer.apply_chat_template`, you can also pass these arguments through `extra_body` and `chat_template_kwargs`:
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+    base_url="http://localhost:8000/v1",
+    api_key="key",
+)
+
+completion = client.chat.completions.create(
+    model="LLM360/K2-Think-60B",
+    messages=[
+        {"role": "system", "content": "You are K2-Think, a helpful assistant created by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) Institute of Foundation Models (IFM)."},
+        {"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"},
+    ],
+    extra_body={
+        "chat_template_kwargs": {"reasoning_effort": "high"},
+    },
+)
+```
+
 ---
 
 # Evaluation & Performance
@@ -110,7 +134,7 @@ If you use K2-Think (70B) in your research, please use the following citation:
 ```bibtex
 @misc{k2thinkteam2026k2think70B,
   title={K2-{T}hink 70{B}: A Fully-Sovereign Reasoning System},
-  author={K2-Think Team and Taylor W. Killian and Varad Pimpalkhute and Richard Fan and Haonan Li and Chengqian Gao and Ming Shan Hee and John Maggs and Guowei He and Zhengzhong Liu and Eric P. Xing},
+  author={K2-Think Team and Taylor W. Killian and Varad Pimpalkhute and Richard Fan and Haonan Li and Chengqian Gao and Ming Shan Hee and Xudong Han and John Maggs and Guowei He and Zhengzhong Liu and Eric P. Xing},
   year={2026},
   url={https://tbd.org},
 }
````
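
In the OpenAI Python client, `extra_body` simply merges extra keys into the top-level JSON body of the request; an OpenAI-compatible serving engine (e.g. vLLM) can then read `chat_template_kwargs` and forward it to the chat template when rendering the prompt. A minimal stdlib sketch of the resulting payload, mirroring the model id and kwargs from the added example (no server is contacted, and the exact body layout is an assumption about the client's behavior):

```python
import json

# Base chat-completions payload, roughly as the OpenAI client would build it.
payload = {
    "model": "LLM360/K2-Think-60B",
    "messages": [
        {"role": "user", "content": "Solve the 24 game [2, 3, 5, 6]"},
    ],
}

# Keys passed via `extra_body` are merged into the top-level request body;
# `chat_template_kwargs` is then available to the server-side chat template.
extra_body = {"chat_template_kwargs": {"reasoning_effort": "high"}}
payload.update(extra_body)

print(json.dumps(payload, indent=2))
```

This is why the README presents `extra_body` as the fallback when `tokenizer.apply_chat_template` is not available: the template keyword travels inside the request body instead of being applied client-side.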
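
The example prompt asks the model to solve the 24 game: combine all four numbers with +, -, *, and / (each number used exactly once) to reach 24. For reference, a tiny brute-force solver (illustrative only, not part of the model card) confirms the instance [2, 3, 5, 6] is solvable, e.g. 2*3*5 - 6 = 24:

```python
from itertools import permutations

def solve_24(numbers, target=24, eps=1e-6):
    """Brute-force search for an arithmetic expression combining all
    numbers (each used exactly once) into `target`; returns the
    expression string, or None if no combination works."""

    def search(items):
        # items: list of (value, expression-string) pairs
        if len(items) == 1:
            value, expr = items[0]
            return expr if abs(value - target) < eps else None
        # Pick an ordered pair, combine it with each operator, recurse.
        for (i, (a, ea)), (j, (b, eb)) in permutations(list(enumerate(items)), 2):
            rest = [items[k] for k in range(len(items)) if k not in (i, j)]
            combos = [(a + b, f"({ea}+{eb})"),
                      (a - b, f"({ea}-{eb})"),
                      (a * b, f"({ea}*{eb})")]
            if abs(b) > eps:  # avoid division by zero
                combos.append((a / b, f"({ea}/{eb})"))
            for value, expr in combos:
                found = search(rest + [(value, expr)])
                if found:
                    return found
        return None

    return search([(float(n), str(n)) for n in numbers])

print(solve_24([2, 3, 5, 6]))
```

Because the recursion repeatedly collapses an ordered pair into one value, it covers every parenthesization; an unsolvable instance such as [1, 1, 1, 1] returns None.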