Add comprehensive model card
README.md CHANGED
@@ -1,301 +1,51 @@
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-0.6B-Base
---
# Qwen3-0.6B

<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:

- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.

## Model Overview

**Qwen3-0.6B** has the following features:
- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
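The architecture numbers above can be read directly from the published configuration; a minimal sketch (the attribute names follow the standard `transformers` config schema):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen3-0.6B")

# Cross-check the specs listed above against config.json
print(config.num_hidden_layers)    # 28 layers
print(config.num_attention_heads)  # 16 query heads
print(config.num_key_value_heads)  # 8 KV heads (GQA)
```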
## Quickstart

The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
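A minimal guard against this, as a sketch using the `packaging` helper (available in most Python environments):

```python
from packaging import version
import transformers

# Qwen3 support requires transformers >= 4.51.0; fail early with a clear message
if version.parse(transformers.__version__) < version.parse("4.51.0"):
    raise RuntimeError(f"transformers {transformers.__version__} is too old for Qwen3; upgrade to >= 4.51.0")
```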
The following code snippet shows how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # switches between thinking and non-thinking modes; default is True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
  ```shell
  python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser qwen3
  ```
- vLLM:
  ```shell
  vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1
  ```
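Once a server is running, any OpenAI-compatible client can talk to it. A minimal sketch with the `openai` Python package (the base URL and `EMPTY` key are assumptions that depend on your launch flags; vLLM listens on port 8000 by default, SGLang on 30000, and `chat_template_kwargs` is a server-side extension rather than part of the OpenAI spec):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
    # toggle thinking mode through the chat template, if the server supports this kwarg
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(completion.choices[0].message.content)
```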
For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```

In this mode, the model will generate thinking content wrapped in a `<think>...</think>` block, followed by the final response.

> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`

We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # setting enable_thinking=False disables thinking mode
)
```

In this mode, the model will not generate any thinking content and will not include a `<think>...</think>` block.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input

We provide a soft-switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-0.6B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]

        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )

        # generate and keep only the newly produced tokens
        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example Usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (without /think or /no_think tags, thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate thinking content and will not include a `<think>...</think>` block.

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-0.6B',

    # Use the endpoint provided by Alibaba Model Studio:
    # 'model_type': 'qwen_dashscope',
    # 'api_key': os.getenv('DASHSCOPE_API_KEY'),

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',

    # Other parameters:
    # 'generate_cfg': {
    #         # Add: when the response content is `<think>this is the thought</think>this is the answer`;
    #         # Do not add: when the response has been separated by reasoning_content and content.
    #         'thought_in_content': True,
    #     },
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```
## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. (These settings are sketched in code after this list.)
   - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed, e.g., by stripping `<think>` blocks as in the second sketch after this list.
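As referenced in point 1, here is a minimal sketch of those settings as plain `transformers` `generate` arguments, reusing `model` and `model_inputs` from the Quickstart (thinking-mode values shown; swap in the non-thinking values as needed):

```python
# Thinking-mode sampling: do_sample=True avoids the greedy decoding warned against above
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # point 2: adequate output length
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```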
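And for point 4, a minimal sketch of the stripping step; the helper name is hypothetical, `history` and `raw_output` stand in for your own conversation state, and it assumes the thinking block appears as a single `<think>...</think>` span:

```python
import re

def strip_thinking(assistant_text: str) -> str:
    # Keep only the final answer; drop the <think>...</think> block before storing history
    return re.sub(r"<think>.*?</think>", "", assistant_text, flags=re.DOTALL).strip()

history.append({"role": "assistant", "content": strip_thinking(raw_output)})
```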
### Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```
# Qwen3-0.6B with Tensor-Slayer Semantic Enhancements
## Model Description

This is an enhanced version of Qwen3-0.6B that has been improved using the [Tensor-Slayer](https://github.com/areu01or00/Tensor-Slayer) framework. The model received 44 carefully crafted tensor patches intended to improve its understanding of semantic relationships.
## Enhancements Applied

- **44 Tensor Patches**: Strategic modifications to embedding, attention, and MLP layers
- **Semantic Relationship Improvements**: Better understanding of synonyms, antonyms, and conceptual relationships
- **Performance Gains**: Improved performance on semantic reasoning tasks
## Original Issues Addressed

The base Qwen3-0.6B showed weak semantic relationships in its embedding space (a measurement sketch follows below):
- `understanding ↔ comprehension` similarity: **0.07** (extremely low for synonyms)
- `surface ↔ deep` similarity: **0.118** (weak antonym differentiation)
- Lexical rather than semantic clustering of tokens
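A minimal sketch of how such similarity numbers can be measured, using cosine similarity between rows of the input embedding matrix (the mean-pooling over word pieces is an assumption; the card does not state the exact measurement protocol):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_similarity(model_id: str, word_a: str, word_b: str) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    emb = model.get_input_embeddings().weight  # vocab_size x hidden_size

    # Mean-pool embedding rows over each word's tokens (single-token words use one row)
    ids_a = tokenizer(word_a, add_special_tokens=False).input_ids
    ids_b = tokenizer(word_b, add_special_tokens=False).input_ids
    vec_a = emb[ids_a].mean(dim=0)
    vec_b = emb[ids_b].mean(dim=0)
    return torch.nn.functional.cosine_similarity(vec_a, vec_b, dim=0).item()

# Compare the base and patched checkpoints on the same word pair
for model_id in ["Qwen/Qwen3-0.6B", "TheFireHacker/Qwen3-0.6b-TensorSlayerPatch"]:
    print(model_id, token_similarity(model_id, "understanding", "comprehension"))
```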
## Expected Improvements

After the tensor patches:
- Synonym similarity: **0.25-0.40** (a +257% to +471% improvement over the 0.07 baseline)
- Better antonym differentiation
- Conceptual rather than lexical token relationships
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheFireHacker/Qwen3-0.6b-TensorSlayerPatch")
model = AutoModelForCausalLM.from_pretrained("TheFireHacker/Qwen3-0.6b-TensorSlayerPatch")
```
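The patched model is a drop-in replacement for the base checkpoint, so the standard Qwen3 chat-template flow applies. A short sketch (the prompt and decoding values are illustrative):

```python
messages = [{"role": "user", "content": "Is 'understanding' closer in meaning to 'comprehension' or to 'undertaking'?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")

# Sample with the thinking-mode settings recommended for the base model
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```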
## Technical Details

- **Base Model**: Qwen/Qwen3-0.6B
- **Enhancement Method**: Direct tensor manipulation via Tensor-Slayer
- **Patches Applied**: 44 strategic scale/clamp operations
- **Target Areas**: Embeddings, attention projections, MLP gates
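To make "scale/clamp operations" concrete, here is a minimal sketch of the general technique of patching checkpoint tensors in place. The layer index, scale factor, and clamp bound below are hypothetical illustrations, not the values Tensor-Slayer actually applied:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

with torch.no_grad():
    # Hypothetical patch: scale one MLP gate projection and clamp weight outliers
    weight = model.model.layers[10].mlp.gate_proj.weight
    weight.mul_(1.05)                  # scale operation
    weight.clamp_(min=-0.5, max=0.5)   # clamp operation

model.save_pretrained("qwen3-0.6b-patched")
```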
## Related Work

- [Tensor-Slayer Framework](https://github.com/areu01or00/Tensor-Slayer)
- [Original Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
- [TimeCapsule-SLM Project](https://github.com/thefirehacker/TimeCapsule-SLM)
## License

Apache 2.0 (same as the base Qwen3-0.6B model)