Update README.md (#22)
- Update README.md (aad3b10348563cc47f2799967aa3b5886a236eb0)
README.md
CHANGED

@@ -38,7 +38,7 @@ base_model:
 | Youtu-LLM-2B-GGUF | Instruct model of Youtu-LLM-2B, in GGUF format | 🤗 [Model](https://huggingface.co/tencent/Youtu-LLM-2B-GGUF)|
 
 ## 📰 News
-- [2026.01.28] You can now directly use Youtu-LLM with [Transformers](https://github.com/huggingface/transformers/).
+- [2026.01.28] You can now directly use Youtu-LLM with [Transformers>=5.1.0](https://github.com/huggingface/transformers/releases/tag/v5.1.0).
 - [2026.01.07] You can now fine-tune Youtu-LLM with [ModelScope](https://mp.weixin.qq.com/s/JJtQWSYEjnE7GnPkaJ7UNA).
 - [2026.01.04] You can now fine-tune Youtu-LLM with [LlamaFactory](https://github.com/hiyouga/LlamaFactory/pull/9707).
 
@@ -91,7 +91,7 @@ base_model:
 This guide will help you quickly deploy and invoke the **Youtu-LLM-2B** model. This model supports "Reasoning Mode", enabling it to generate higher-quality responses through Chain of Thought (CoT).
 
 <details>
-<summary>Transformers</summary>
+<summary>Transformers >= 4.56.0, <= 4.57.1</summary>
 
 If you wish to use Youtu-LLM-2B based on earlier versions of transformers, please make sure to download the model repository before this [commit](https://huggingface.co/tencent/Youtu-LLM-2B/commit/5690998a0a4cae7a7ec970d09262745e00bb6c5c).
 
@@ -177,7 +177,7 @@ print(f"\n{'='*20} Final Answer {'='*20}\n{final_answer}")
 </details>
 
 <details>
-<summary>Transformers</summary>
+<summary>Transformers >= 5.1.0</summary>
 
 ### 1. Environment Preparation
 Ensure your Python environment has the `transformers` library installed and that the version meets the requirements.