---
base_model:
- LLM360/K2-V2
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# K2-Think (70B): A Fully-Sovereign Reasoning System

📚 [Paper]() - 📝 [Code](https://github.com/LLM360/Reasoning360) - 🏢 [Project Page](https://k2think.ai)

<center><img src="banner.png" alt="k2-think-banner"/></center>

<br>

K2-Think (70B) is a 70-billion-parameter open-weights general reasoning model with strong performance in competitive mathematical problem solving. Built on top of K2-V2, it forms a fully sovereign reasoning system.

# Quickstart

### Transformers
You can use `K2-Think (70B)` with Transformers. If you use `transformers.pipeline`, the chat template is applied automatically; if you call `model.generate` directly, you need to apply the chat template manually.

```python
from transformers import pipeline
import torch

model_id = "LLM360/K2-Think-70B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "what is the next prime number after 2600?"},
]

outputs = pipe(
    messages,
    max_new_tokens=32768,
)
print(outputs[0]["generated_text"][-1])
```
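
The example prompt has a mechanically checkable answer, which makes it handy for a smoke test. Here is a minimal sketch (our addition, not part of the model's API) that computes the expected answer in plain Python so you can verify the model's response:

```python
def is_prime(n: int) -> bool:
    """Primality by trial division up to sqrt(n); fine for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True


def next_prime(n: int) -> int:
    """Smallest prime strictly greater than n."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate


print(next_prime(2600))  # 2609 -- the answer the model's final response should contain
```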

---

# Evaluation & Performance
A summary of evaluation results is reported in our [Blog]().

## Benchmarks (pass@1, average over 16 runs)

| Domain  | Benchmark            | K2-Think (70B) |
| ------- | -------------------- | -------------: |
| Math    | AIME 2025            |          90.42 |
| Math    | HMMT 2025            |          84.79 |
| Code    | LiveCodeBench v5     |            TBD |
| Science | GPQA-Diamond         |          72.98 |
| Science | Humanity's Last Exam |            TBD |
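
The header "pass@1, average over 16 runs" means each benchmark is attempted once per run, scored as the fraction of problems solved, and the reported number is the mean over 16 independent runs. A small illustrative sketch (with made-up per-run data, not real benchmark results):

```python
# Hypothetical 0/1 correctness grid: rows = independent runs, cols = problems.
runs = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]

# pass@1 for one run = fraction of problems solved on the single attempt (in %);
# the reported benchmark number is the mean of this over all runs.
per_run = [100 * sum(r) / len(r) for r in runs]
score = sum(per_run) / len(per_run)
print(round(score, 2))  # 75.0 for this toy grid
```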

<!-- --- -->

<!-- ## Inference Speed

We deploy K2-Think (70B) on Cerebras Wafer-Scale Engine (WSE) systems, leveraging the world's largest processor and speculative decoding to achieve unprecedented inference speeds for our 70B reasoning system.

| Platform                          | Throughput (tokens/sec) | Example: 32k-token response (time) |
| --------------------------------- | ----------------------: | ---------------------------------: |
| **Cerebras WSE (our deployment)** |              **~2,000** |                          **~16 s** |
| Typical Cloud Service setup       |                    ~200 |                             ~160 s | -->

<!-- --- -->

<!-- ## Safety Evaluation

Aggregated across four safety dimensions (**Safety-4**):

| Aspect                          | Macro-Avg |
| ------------------------------- | --------: |
| High-Risk Content Refusal       |      0.83 |
| Conversational Robustness       |      0.89 |
| Cybersecurity & Data Protection |      0.56 |
| Jailbreak Resistance            |      0.72 |
| **Safety-4 Macro (avg)**        |  **0.75** |

--- -->

# Terms of Use

We have employed various techniques to reduce bias, harmful outputs, and other risks in the model. While these efforts help improve safety and reliability, the model, like all Large Language Models, may still generate inaccurate, misleading, biased, or otherwise undesirable content. By downloading, using, or interacting with this model, you acknowledge these limitations and agree to the following:

1. **Prohibited Uses**
   - You may **not** use this model for any **illegal, unlawful, or harmful activities**, including but not limited to fraud, abuse, harassment, privacy violations, or the creation/dissemination of malicious content.

2. **User Responsibility**
   - You are solely responsible for how you use the model and for any outcomes that result from its use.
   - The authors and institutions involved in releasing this model do **not** accept liability for any consequences arising from its use.

3. **No Warranty**
   - The model is provided **“as is” without any warranties or guarantees**.

---

# Citation
If you use K2-Think (70B) in your research, please use the following citation:

```bibtex
@misc{k2thinkteam2026k2think70B,
      title={K2-{T}hink 70{B}: A Fully-Sovereign Reasoning System},
      author={K2-Think Team and Taylor W. Killian and Varad Pimpalkhute and Richard Fan and Haonan Li and Chengqian Gao and Ming Shan Hee and John Maggs and Guowei He and Zhengzhong Liu and Eric P. Xing},
      year={2026},
      url={https://tbd.org},
}
```