qqc1989 committed
Commit f61ff15 · verified · 1 Parent(s): 139dcbd

Update README.md

Files changed (1)
  1. README.md +161 -219
README.md CHANGED
@@ -1,255 +1,197 @@
  ---
  library_name: transformers
- license: bsd-3-clause
  ---

- # DeepSeek-R1-Distill-Qwen-1.5B-AX650&AX630C
-
- This version of DeepSeek-R1-Distill-Qwen-1.5B has been converted to run on the Axera NPU using w8a16 quantization.
-
  This model has been optimized with the following LoRA:
-
- Compatible with Pulsar2 version: 3.3
-
- ## Useful links:
- [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
-
- [AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm)
-
-
- # Original Model Card for base model, DeepSeek-R1-Distill-Qwen-1.5B, below:
-
- # DeepSeek-R1
- <!-- markdownlint-disable first-line-h1 -->
- <!-- markdownlint-disable html -->
- <!-- markdownlint-disable no-duplicate-header -->
-
- <div align="center">
- <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
- </div>
- <hr>
- <div align="center" style="line-height: 1;">
- <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
- <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
- <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
- <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <div align="center" style="line-height: 1;">
- <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
- <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
- <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
- <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <div align="center" style="line-height: 1;">
- <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
- <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
- <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <p align="center">
- <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
- </p>
-
- ## 1. Introduction
-
- We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
- DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
- With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
- However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
- we introduce DeepSeek-R1, which incorporates cold-start data before RL.
- DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
- To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
-
- **NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
-
- <p align="center">
- <img width="80%" src="figures/benchmark.jpg">
- </p>
-
- ## 2. Model Summary
-
- ---
-
- **Post-Training: Large-Scale Reinforcement Learning on the Base Model**
-
- - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
-
- - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
- We believe the pipeline will benefit the industry by creating better models.
-
- ---
-
- **Distillation: Smaller Models Can Be Powerful Too**
-
- - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
-
- ## 3. Model Downloads
-
- ### DeepSeek-R1 Models
-
- <div align="center">
-
- | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
- | :------------: | :------------: | :------------: | :------------: | :------------: |
- | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
- | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
-
- </div>
-
- DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
- For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
-
- ### DeepSeek-R1-Distill Models
-
- <div align="center">

- | **Model** | **Base Model** | **Download** |
- | :------------: | :------------: | :------------: |
- | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
- | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
- | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
- | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
- | DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
- | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |

- </div>

- DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
- We slightly change their configs and tokenizers. Please use our setting to run these models.
-
- ## 4. Evaluation Results
-
- ### DeepSeek-R1-Evaluation
- For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
- <div align="center">
-
- | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
- |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
- | | Architecture | - | - | MoE | - | - | MoE |
- | | # Activated Params | - | - | 37B | - | - | 37B |
- | | # Total Params | - | - | 671B | - | - | 671B |
- | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
- | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
- | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
- | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
- | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
- | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
- | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
- | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
- | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
- | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
- | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
- | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
- | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
- | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
- | | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
- | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
- | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
- | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
- | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
- | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
- | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
-
- </div>
-
- ### Distilled Model Evaluation
-
- <div align="center">
-
- | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
- |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
- | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
- | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
- | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
- | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
- | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
- | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
- | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
- | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
- | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
- | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
-
- </div>
-
- ## 5. Chat Website & API Platform
- You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink"
-
- We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
-
- ## 6. How to Run Locally
-
- ### DeepSeek-R1 Models
-
- Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
-
- ### DeepSeek-R1-Distill Models
-
- DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
-
- For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
-
- ```shell
- vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
  ```
-
- You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
-
- ```bash
- python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
  ```

- ### Usage Recommendations
-
- **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
-
- 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
- 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
- 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
-
- ## 7. License
- This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
- DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
-
- ## 8. Citation
  ```
- @misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
- title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
- author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
- year={2025},
- eprint={2501.12948},
- archivePrefix={arXiv},
- primaryClass={cs.CL},
- url={https://arxiv.org/abs/2501.12948},
  }

- ```
-
- ## 9. Contact
- If you have any questions, please raise an issue or contact us at [service@deepseek.com](service@deepseek.com).
-

  ---
+ license: mit
+ language:
+ - zh
+ - en
+ base_model:
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
+ pipeline_tag: text-generation
  library_name: transformers
+ tags:
+ - Context
+ - DeepSeek-R1-Distill-Qwen-1.5B
  ---

+ # DeepSeek-R1-Distill-Qwen-1.5B

+ This version of DeepSeek-R1-Distill-Qwen-1.5B has been converted to run on the Axera NPU using **w8a16** and **w4a16** quantization.

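+ A quick note on what **w8a16** / **w4a16** mean: weights are stored as 8-bit or 4-bit integers while activations stay in 16-bit floating point, which shrinks weight storage and DDR bandwidth at a small accuracy cost. The sketch below illustrates the general idea of per-channel weight-only quantization; it is not the Pulsar2 implementation:

+ ```python
+ # Illustrative sketch of weight-only quantization (the idea behind w8a16/w4a16).
+ # This is NOT the Pulsar2 toolchain's actual scheme.
+ import numpy as np
+
+ def quantize_weights(w: np.ndarray, bits: int = 8):
+     """Symmetric per-output-channel quantization of a weight matrix."""
+     qmax = 2 ** (bits - 1) - 1                        # 127 for int8, 7 for int4
+     scale = np.abs(w).max(axis=1, keepdims=True) / qmax
+     q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
+     return q, scale
+
+ def matmul_w8a16(x: np.ndarray, q: np.ndarray, scale: np.ndarray):
+     """Activations stay 16-bit (the "a16" part); weights are expanded on the fly."""
+     w_hat = q.astype(np.float16) * scale.astype(np.float16)
+     return x.astype(np.float16) @ w_hat.T
+
+ w = np.random.randn(16, 64).astype(np.float32)
+ x = np.random.randn(2, 64).astype(np.float32)
+ q, s = quantize_weights(w, bits=8)
+ print(np.abs(matmul_w8a16(x, q, s) - x @ w.T).max())  # small quantization error
+ ```
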
  This model has been optimized with the following LoRA:

+ Compatible with Pulsar2 version: 4.1

+ ## Features

+ - Supports longer contexts; in this sample the context length is 2k
+ - Supports multi-turn context dialogue
+ - Supports caching the system prompt in the kv cache

+ ## Conversion tool links:

+ If you are interested in model conversion, you can try to export the axmodel yourself starting from the original repo: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

+ [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)

+ [AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)

+ [AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)

+ ### Conversion script

+ The following shows how to convert DeepSeek-R1-Distill-Qwen-1.5B:

+ ```
+ pulsar2 llm_build --input_path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
+ --output_path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B-ax650 \
+ --hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
+ --last_kv_cache_len 128 \
+ --last_kv_cache_len 256 \
+ --last_kv_cache_len 384 \
+ --last_kv_cache_len 512 \
+ --last_kv_cache_len 640 \
+ --last_kv_cache_len 768 \
+ --last_kv_cache_len 896 \
+ --last_kv_cache_len 1024 \
+ --last_kv_cache_len 1152 \
+ --last_kv_cache_len 1280 \
+ --last_kv_cache_len 1408 \
+ --last_kv_cache_len 1536 \
+ --chip AX650 -c 1 --parallel 8
+ ```

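+ Note that `--kv_cache_len 2047` is what gives the 2k context (`max_token_len : 2047` in the runtime log below), and each `--last_kv_cache_len` value registers a prefill-length group; these reappear at runtime as the `prefill_max_token_num` groups (128 through 1536) printed at init.
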
+ ## Supported Platforms

+ - AX650
+   - AX650N DEMO Board
+   - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
+   - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
+ - AX630C
+   - *TBD*
+
+ | Chips | w8a16 | w4a16 | DDR | Flash |
+ |--|--|--|--|--|
+ | AX650 | 12 tokens/sec | 17 tokens/sec | 2.3 GB | 2.3 GB |
 
+ ## How to use

+ Download all files from this repository to the device.
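+ For example, with `huggingface-cli` (the repository id below is a placeholder; substitute this repo's actual id):

+ ```
+ huggingface-cli download <this-repo-id> --local-dir deepseek-r1-1.5b-ctx
+ ```
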
+ ```
+ root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# tree -L 1
+ .
+ |-- config.json
+ |-- deepseek-r1-1.5b-ctx-ax650
+ |-- deepseek-r1_tokenizer
+ |-- deepseek-r1_tokenizer_uid.py
+ |-- main_ax650
+ |-- main_axcl_aarch64
+ |-- main_axcl_x86
+ |-- post_config.json
+ |-- run_deepseek-r1_1.5b_ctx_ax650.sh
+ |-- run_deepseek-r1_1.5b_ctx_axcl_aarch64.sh
+ `-- run_deepseek-r1_1.5b_ctx_axcl_x86.sh
+
+ 2 directories, 9 files
+ ```
 
+ #### Start the Tokenizer service

+ ```
+ root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# python3 deepseek-r1_tokenizer_uid.py
+ Server running at http://0.0.0.0:12345
+ ```
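+ The tokenizer runs out of process and the runtime reaches it over HTTP (see the `--url_tokenizer_model` flag in the run script below). The bundled `deepseek-r1_tokenizer_uid.py` is the actual implementation; purely to illustrate the shape of such a service, a minimal sketch follows (the request/response fields here are invented for the sketch, not the real protocol):

+ ```python
+ # Minimal sketch of an out-of-process HTTP tokenizer service.
+ # NOTE: the JSON fields are hypothetical; the shipped
+ # deepseek-r1_tokenizer_uid.py defines the real protocol.
+ import json
+ from http.server import BaseHTTPRequestHandler, HTTPServer
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("deepseek-r1_tokenizer")  # local dir from the tree above
+
+ class Handler(BaseHTTPRequestHandler):
+     def do_POST(self):
+         body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
+         ids = tokenizer.encode(body["text"])   # tokenize on behalf of the NPU runtime
+         out = json.dumps({"token_ids": ids}).encode()
+         self.send_response(200)
+         self.send_header("Content-Type", "application/json")
+         self.end_headers()
+         self.wfile.write(out)
+
+ HTTPServer(("0.0.0.0", 12345), Handler).serve_forever()
+ ```
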
 
+ #### System prompt cache

+ - The system prompt can be preset via the `--system_prompt` option
+ - The system prompt can be cached as a kv cache in a folder specified by `--kvcache_path`, for quick loading on the next run
+ - This folder must be created manually before running, e.g. `mkdir kvcache`; a usage sketch follows the run script below

  ```
+ root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# cat run_deepseek-r1_1.5b_ctx_ax650.sh
+ ./main_ax650 \
+ --template_filename_axmodel "deepseek-r1-1.5b-ctx-ax650/qwen2_p128_l%d_together.axmodel" \
+ --axmodel_num 28 \
+ --tokenizer_type 2 \
+ --url_tokenizer_model "http://0.0.0.0:12345" \
+ --filename_post_axmodel "deepseek-r1-1.5b-ctx-ax650/qwen2_post.axmodel" \
+ --filename_tokens_embed "deepseek-r1-1.5b-ctx-ax650/model.embed_tokens.weight.bfloat16.bin" \
+ --tokens_embed_num 151936 \
+ --tokens_embed_size 1536 \
+ --use_mmap_load_embed 1 \
+ --live_print 1
  ```
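+ A few of these flags decode as follows: `--axmodel_num 28` matches the 28 transformer layers of the 1.5B model (the `%d` in `--template_filename_axmodel` is filled with the layer index), and `--tokens_embed_num 151936` / `--tokens_embed_size 1536` are the Qwen2 vocabulary and hidden sizes.

+ To enable the system prompt cache described above, the same invocation can be extended with the `--system_prompt` and `--kvcache_path` options. A minimal sketch, assuming the flag spellings given earlier (the prompt text is only an example):

+ ```
+ mkdir -p kvcache
+ ./main_ax650 \
+ ... same flags as in run_deepseek-r1_1.5b_ctx_ax650.sh ... \
+ --system_prompt "You are a helpful assistant." \
+ --kvcache_path "./kvcache"
+ ```
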
 
+ #### Inference with an AX650 host, such as the M4N-Dock(爱芯派Pro) or AX650N DEMO Board

+ Open another terminal and run `run_deepseek-r1_1.5b_ctx_ax650.sh`:

  ```
+ root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# ./run_deepseek-r1_1.5b_ctx_ax650.sh
+ [I][ Init][ 110]: LLM init start
+ [I][ Init][ 34]: connect http://0.0.0.0:12345 ok
+ [I][ Init][ 57]: uid: 7fedc3e5-e824-4915-935a-c0de5a341928
+ bos_id: 151646, eos_id: 151643
+ 3% | ██ | 1 / 31 [2.28s<70.62s, 0.44 count/s] tokenizer init ok
+ [I][ Init][ 26]: LLaMaEmbedSelector use mmap
+ 100% | ████████████████████████████████ | 31 / 31 [26.47s<26.47s, 1.17 count/s] init post axmodel ok,remain_cmm(8947 MB)
+ [I][ Init][ 188]: max_token_len : 2047
+ [I][ Init][ 193]: kv_cache_size : 256, kv_cache_num: 2047
+ [I][ Init][ 201]: prefill_token_num : 128
+ [I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
+ [I][ Init][ 205]: grp: 2, prefill_max_token_num : 128
+ [I][ Init][ 205]: grp: 3, prefill_max_token_num : 256
+ [I][ Init][ 205]: grp: 4, prefill_max_token_num : 384
+ [I][ Init][ 205]: grp: 5, prefill_max_token_num : 512
+ [I][ Init][ 205]: grp: 6, prefill_max_token_num : 640
+ [I][ Init][ 205]: grp: 7, prefill_max_token_num : 768
+ [I][ Init][ 205]: grp: 8, prefill_max_token_num : 896
+ [I][ Init][ 205]: grp: 9, prefill_max_token_num : 1024
+ [I][ Init][ 205]: grp: 10, prefill_max_token_num : 1152
+ [I][ Init][ 205]: grp: 11, prefill_max_token_num : 1280
+ [I][ Init][ 205]: grp: 12, prefill_max_token_num : 1408
+ [I][ Init][ 205]: grp: 13, prefill_max_token_num : 1536
+ [I][ Init][ 209]: prefill_max_token_num : 1536
+ [I][ load_config][ 282]: load config:
+ {
+ "enable_repetition_penalty": false,
+ "enable_temperature": true,
+ "enable_top_k_sampling": true,
+ "enable_top_p_sampling": false,
+ "penalty_window": 20,
+ "repetition_penalty": 1.2,
+ "temperature": 0.9,
+ "top_k": 10,
+ "top_p": 0.8
  }

+ [I][ Init][ 218]: LLM init ok
+ Type "q" to exit, Ctrl+c to stop current running
+ [I][ GenerateKVCachePrefill][ 271]: input token num : 16, prefill_split_num : 1 prefill_grpid : 2
+ [I][ GenerateKVCachePrefill][ 308]: input_num_token:16
+ [I][ main][ 230]: precompute_len: 16
+ [I][ main][ 231]: system_prompt:
+ prompt >> 1+2=?
+ [I][ SetKVCache][ 531]: prefill_grpid:2 kv_cache_num:128 precompute_len:16 input_num_token:8
+ [I][ SetKVCache][ 534]: current prefill_max_token_num:1408
+ [I][ Run][ 660]: input token num : 8, prefill_split_num : 1
+ [I][ Run][ 686]: input_num_token:8
+ [I][ Run][ 829]: ttft: 306.60 ms
+ <think>
+ Okay, the user has asked "1+2=?", which is a simple addition question.
+ I should provide the answer, but also consider if there's more to it.
+
+ Since the user specified "Qwen, created by Alibaba Cloud,"
+ maybe they're testing if I understand the context or need further assistance within that framework.
+
+ I'll give the correct sum and let them know if they need anything else. That should be helpful.
+ </think>
+
+ 1 + 2 equals **3**.
+
+ [N][ Run][ 943]: hit eos,avg 11.25 token/s
+
+ [I][ GetKVCache][ 500]: precompute_len:123, remaining:1413
+ prompt >> q
+ root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx#
+ ```
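
+ Note how the runtime keeps the conversation in the kv cache between turns: after the first answer, `GetKVCache` reports `precompute_len:123`, so the next prompt only needs to prefill its own new tokens out of the remaining 1413-token budget (123 + 1413 = the 1536-token `prefill_max_token_num`).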