kl committed: Update README.md
[English](README.md) | [中文](README_CN.md)

## Model Description

We introduce **MindLink**, a new family of large language models developed by **Kunlun Inc**. Built on **Qwen**, these models incorporate our latest advances in post-training techniques. MindLink demonstrates strong performance across a range of common benchmarks and is broadly applicable to diverse AI scenarios. We welcome feedback to help us continuously optimize and improve our models.

---

## Highlights

* **Plan-based Reasoning**: Without the "think" tag, MindLink achieves performance competitive with leading proprietary models across a wide range of reasoning and general tasks. It significantly reduces inference cost and improves multi-turn capabilities.
* **Mathematical Framework**: It analyzes the effectiveness of both **Chain-of-Thought (CoT)** and **Plan-based Reasoning**.

---

## Quickstart

The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.

> ⚠️ Please make sure you have installed `transformers>=4.51.0`; lower versions are not supported.
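As a quick sanity check, you can compare the installed version against this requirement before loading the model. This is a minimal sketch; `at_least` is a hypothetical helper, not part of transformers, and it only compares the numeric parts of the version string.

```python
import re

def at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically (ignores dev/rc suffixes)."""
    def key(v: str):
        # Take the first three numeric components, e.g. "4.52.0.dev0" -> (4, 52, 0)
        return tuple(int(x) for x in re.findall(r"\d+", v)[:3])
    return key(installed) >= key(required)

print(at_least("4.51.0", "4.51.0"))  # True
print(at_least("4.50.3", "4.51.0"))  # False
```

In practice you would call it as `at_least(transformers.__version__, "4.51.0")`.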

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Skywork/MindLink-32B-0801"

# Load the model weights and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Format the user message with the model's chat template
prompt = "What is the capital of China?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then keep only the newly generated tokens
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
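The slicing step above (dropping the first `len(input_ids)` tokens from each output) works because `generate` returns the prompt tokens followed by the new tokens. A toy illustration with plain Python lists and made-up token ids:

```python
# One prompt of 3 token ids (toy values, not real vocabulary ids)
input_ids = [[101, 2054, 2003]]
# generate() echoes the prompt, then appends the newly generated tokens
generated = [[101, 2054, 2003, 7592, 102]]

# Keep only the tokens past the prompt length for each sequence
new_tokens = [
    output_ids[len(prompt_ids):]
    for prompt_ids, output_ids in zip(input_ids, generated)
]
print(new_tokens)  # [[7592, 102]]
```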

For deployment, you can use `sglang>=0.4.6.post1` to create an OpenAI-compatible API endpoint:

- SGLang:
```shell
python -m sglang.launch_server --model-path Skywork/MindLink-32B-0801
```
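Once the server is running, any OpenAI-compatible client can talk to it. A minimal sketch of the request body for the chat-completions route, assuming sglang's default port of 30000 (override with `--port` when launching):

```python
import json

# Request body for POST http://localhost:30000/v1/chat/completions
# (port 30000 is assumed here as sglang's default).
payload = {
    "model": "Skywork/MindLink-32B-0801",
    "messages": [{"role": "user", "content": "What is the capital of China?"}],
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```

You can send this with `curl` or the official `openai` Python client pointed at `base_url="http://localhost:30000/v1"`.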

---

## API Access

📢 We provide developers with a **one-month free trial** of our API for exploring and testing our models. To request access to an **Open WebUI account** (https://sd1svahsfo0m61h76e190.apigateway-cn-beijing.volceapi.com), please contact us at: **[mindlink@skywork.ai](mailto:mindlink@skywork.ai)**

---

## Evaluation

The results are shown below:
