Update README.md

#9
by Junrulu - opened
Files changed (1)
  1. README.md +8 -9
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 library_name: transformers
 license: other
+license_name: youtu-llm
 license_link: https://huggingface.co/tencent/Youtu-LLM-2B/LICENSE.txt
 pipeline_tag: text-generation
 base_model:
@@ -29,6 +30,13 @@ base_model:
 - Context Length: 131,072
 - Vocabulary Size: 128,256
 
+## 🤗 Model Download
+| Model Name | Description | Download |
+| ----------- | ----------- | ----------- |
+| Youtu-LLM-2B-Base | Base model of Youtu-LLM-2B | 🤗 [Model](https://huggingface.co/tencent/Youtu-LLM-2B-Base) |
+| Youtu-LLM-2B | Instruct model of Youtu-LLM-2B | 🤗 [Model](https://huggingface.co/tencent/Youtu-LLM-2B) |
+| Youtu-LLM-2B-GGUF | Instruct model of Youtu-LLM-2B, in GGUF format | 🤗 [Model](https://huggingface.co/tencent/Youtu-LLM-2B-GGUF) |
+
 <a id="benchmarks"></a>
 
 ## 📊 Performance Comparisons
@@ -77,8 +85,6 @@ base_model:
 ## 🚀 Quick Start
 This guide will help you quickly deploy and invoke the **Youtu-LLM-2B** model. This model supports "Reasoning Mode", enabling it to generate higher-quality responses through Chain of Thought (CoT).
 
----
-
 ### 1. Environment Preparation
 
 Ensure your Python environment has the `transformers` library installed and that the version meets the requirements.
@@ -88,8 +94,6 @@ pip install "transformers>=4.56" torch accelerate
 
 ```
 
----
-
 ### 2. Core Code Example
 
 The following example demonstrates how to load the model, enable Reasoning Mode, and use the `re` module to parse the "Thought Process" and the "Final Answer" from the output.
@@ -156,8 +160,6 @@ print(f"\n{'='*20} Final Answer {'='*20}\n{final_answer}")
 
 ```
 
----
-
 ### 3. Key Configuration Details
 
 #### Reasoning Mode Toggle
@@ -181,8 +183,6 @@ Depending on your use case, we suggest adjusting the following hyperparameters f
 
 > **Tip:** When using Reasoning Mode, a higher `temperature` helps the model perform deeper, more divergent thinking.
 
----
-
 ### 4. vLLM Deployment
 
 We provide support for deploying the model using **vLLM 0.10.2**. The recommended Docker image is `vllm/vllm-openai:v0.10.2`.
@@ -211,7 +211,6 @@ To enable tool calling capabilities, please append the following arguments to th
 ```bash
 --enable-auto-tool-choice --tool-call-parser hermes
 ```
----
 
 <a id="highlights"></a>
 
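The README's Quick Start section mentions using the `re` module to split model output into a "Thought Process" and a "Final Answer", but the full code is not shown in this diff. Below is a minimal sketch of such a parsing step; the `<think>...</think>` delimiter tags are an assumption for illustration, since the actual reasoning tokens used by Youtu-LLM-2B do not appear in the excerpt.

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split raw model output into (thought_process, final_answer).

    Assumes the reasoning is wrapped in <think>...</think> tags; the
    real delimiter tokens for Youtu-LLM-2B may differ.
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if match:
        thought = match.group(1).strip()          # text inside the tags
        answer = text[match.end():].strip()       # everything after them
    else:
        # No reasoning block found: treat the whole output as the answer.
        thought, answer = "", text.strip()
    return thought, answer


output = "<think>2 + 2 equals 4.</think>The answer is 4."
thought, final_answer = split_reasoning(output)
print(f"\n{'='*20} Final Answer {'='*20}\n{final_answer}")
```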
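The vLLM section of the README recommends the `vllm/vllm-openai:v0.10.2` Docker image and gives the two tool-calling flags, but the full launch command is outside this diff. A sketch of what such a deployment could look like is below; the GPU flag, port mapping, and model path are assumptions, while the image tag and the two tool-calling flags come from the README.

```shell
# Hypothetical launch command for Youtu-LLM-2B with vLLM 0.10.2.
# Only the image tag and the last two flags are taken from the README;
# adjust the model path, GPU selection, and port for your environment.
docker run --gpus all -p 8000:8000 \
  vllm/vllm-openai:v0.10.2 \
  --model tencent/Youtu-LLM-2B \
  --enable-auto-tool-choice --tool-call-parser hermes
```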