wli1995 committed on
Commit 2007eaa · verified · 1 Parent(s): eb98785

update README.md

Files changed (1)
  1. README.md +137 -211
README.md CHANGED
@@ -13,245 +13,171 @@ license: bsd-3-clause

  - Due to the current w8a16 quantization scheme, the CMM consumes about 7.6 GiB of memory, so a 16 GiB development board is required.

- ## Useful links:
- [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
-
- [AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm)
-
- # Original Model Card for base model, DeepSeek-R1-Distill-Qwen-1.5B, below:
-
- # DeepSeek-R1
- <!-- markdownlint-disable first-line-h1 -->
- <!-- markdownlint-disable html -->
- <!-- markdownlint-disable no-duplicate-header -->
-
- <div align="center">
- <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
- </div>
- <hr>
- <div align="center" style="line-height: 1;">
- <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
- <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
- <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
- <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <div align="center" style="line-height: 1;">
- <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
- <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
- <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
- <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <div align="center" style="line-height: 1;">
- <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-CODE" style="margin: 2px;">
- <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
- </a>
- <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE-MODEL" style="margin: 2px;">
- <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
- </a>
- </div>
-
- <p align="center">
- <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
- </p>
-
- ## 1. Introduction
-
- We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
- DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
- With RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors.
- However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
- we introduce DeepSeek-R1, which incorporates cold-start data before RL.
- DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
- To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
-
- **NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendations](#usage-recommendations) section.**
-
- <p align="center">
- <img width="80%" src="figures/benchmark.jpg">
- </p>
-
- ## 2. Model Summary
-
- ---
-
- **Post-Training: Large-Scale Reinforcement Learning on the Base Model**
-
- - We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
-
- - We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
- We believe the pipeline will benefit the industry by creating better models.
-
- ---
-
- **Distillation: Smaller Models Can Be Powerful Too**
-
- - We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance than the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better smaller models in the future.
- - Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on the Qwen2.5 and Llama3 series to the community.
-
- ## 3. Model Downloads
-
- ### DeepSeek-R1 Models
-
- <div align="center">
-
- | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
- | :------------: | :------------: | :------------: | :------------: | :------------: |
- | DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
- | DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
-
- </div>
-
- DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
- For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
-
- ### DeepSeek-R1-Distill Models
-
- <div align="center">

- | **Model** | **Base Model** | **Download** |
- | :------------: | :------------: | :------------: |
- | DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
- | DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
- | DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
- | DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
- | DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
- | DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |

- </div>

- DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
- We slightly changed their configs and tokenizers. Please use our settings to run these models.

- ## 4. Evaluation Results
-
- ### DeepSeek-R1-Evaluation
- For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
- <div align="center">
-
- | Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
- |----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
- | | Architecture | - | - | MoE | - | - | MoE |
- | | # Activated Params | - | - | 37B | - | - | 37B |
- | | # Total Params | - | - | 671B | - | - | 671B |
- | English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
- | | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
- | | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
- | | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
- | | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
- | | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
- | | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
- | | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
- | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
- | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
- | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
- | | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
- | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
- | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
- | | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
- | Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
- | | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
- | | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
- | Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
- | | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
- | | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
-
- </div>
 
 
- ### Distilled Model Evaluation
-
- <div align="center">
-
- | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
- |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
- | GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
- | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
- | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
- | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
- | DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
- | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
- | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
- | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
- | DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
- | DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
-
- </div>
-
- ## 5. Chat Website & API Platform
- You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
-
- We also provide an OpenAI-compatible API at the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
-
- ## 6. How to Run Locally
-
- ### DeepSeek-R1 Models
-
- Please visit the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
-
- ### DeepSeek-R1-Distill Models
-
- DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
-
- For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
-
- ```shell
- vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
 ```
-
- You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
-
- ```bash
- python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
- ```
-
- ### Usage Recommendations
-
- **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
-
- 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetition or incoherent outputs.
- 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
- 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
- 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
-
- ## 7. License
- This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
- The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- - DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- - DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- - DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

- ## 8. Citation
- ```
- @misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
-   title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
-   author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
-   year={2025},
-   eprint={2501.12948},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL},
-   url={https://arxiv.org/abs/2501.12948},
- }
 
 ```

- ## 9. Contact
- If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
-
+ ## Features
+
+ - Supports longer context (2k in this sample)
+ - Supports multi-turn dialogue with context
+ - Supports a system prompt kvcache
+
+ ## Conversion tool links
+
+ If you are interested in model conversion, you can try to export the axmodel from the original repos: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B and https://huggingface.co/jakiAJK/DeepSeek-R1-Distill-Qwen-7B_GPTQ-int4
+
+ [Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
+
+ [AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)
+
+ [AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)
+
+ ### Conversion script
+
+ The following shows how to convert DeepSeek-R1-Distill-Qwen-7B:
+
+ ```
+ pulsar2 llm_build --input_path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
+ --output_path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B-ax650 \
+ --hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
+ --last_kv_cache_len 128 \
+ --last_kv_cache_len 256 \
+ --last_kv_cache_len 384 \
+ --last_kv_cache_len 512 \
+ --last_kv_cache_len 640 \
+ --last_kv_cache_len 768 \
+ --last_kv_cache_len 896 \
+ --last_kv_cache_len 1024 \
+ --last_kv_cache_len 1152 \
+ --last_kv_cache_len 1280 \
+ --last_kv_cache_len 1408 \
+ --last_kv_cache_len 1536 \
+ --chip AX650 -c 1 --parallel 8
+ ```
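+
+ Passing `--last_kv_cache_len` multiple times appears to register one prefill group per value, so at run time the engine can pick the group that best matches the current prompt length; compare the `grp: N, prefill_max_token_num` lines in the startup log further below.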
+
+ ## Supported Platforms
+
+ - AX650
+   - AX650N DEMO Board
+   - [M4N-Dock (爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
+   - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
+ - AX630C
+   - *TBD*
+
+ | Chip | w8a16 | w4a16 |
+ |--|--|--|
+ | AX650 | 2.6 tokens/sec | 4.8 tokens/sec |
+
+ ## How to use
+
+ Download all files from this repository to the device.
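+
+ For example, with `git` and `git-lfs` (a hypothetical sketch; substitute this repository's actual id for `<repo-id>`):
+
+ ```
+ # git-lfs is required because the axmodel weights are large binary files
+ git lfs install
+ git clone https://huggingface.co/<repo-id> DeepSeek-R1-Distill-Qwen-7B
+ cd DeepSeek-R1-Distill-Qwen-7B
+ ```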
+
+ ```
+ root@ax650:~/wangli/huggingface/DeepSeek-R1-Distill-Qwen-7B# tree -L 1
+ .
+ |-- README.md
+ |-- config.json
+ |-- deepseek-r1-7b-ax650
+ |-- deepseek-r1-7b-int4-ax650
+ |-- deepseek-r1_tokenizer
+ |-- deepseek-r1_tokenizer.py
+ |-- main_ax650
+ |-- main_axcl_aarch64
+ |-- main_axcl_x86
+ |-- post_config.json
+ |-- run_deepseek-r1_7b_ax650.sh
+ |-- run_deepseek-r1_7b_axcl_aarch64.sh
+ |-- run_deepseek-r1_7b_axcl_x86.sh
+ |-- run_deepseek-r1_7b_int4_ax650.sh
+ |-- run_deepseek-r1_7b_int4_axcl_aarch64.sh
+ `-- run_deepseek-r1_7b_int4_axcl_x86.sh
+
+ 3 directories, 13 files
+ ```
+
+ #### Start the Tokenizer service
+
+ ```
+ root@ax650:~/wangli/huggingface/DeepSeek-R1-Distill-Qwen-7B# python3 deepseek-r1_tokenizer.py
+ Server running at http://0.0.0.0:12345
+ ```
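+
+ If you prefer not to keep a separate terminal open for it, the service can also be backgrounded (a minimal sketch; the log file name is illustrative):
+
+ ```
+ # run the tokenizer service in the background and keep its output in a log file
+ python3 deepseek-r1_tokenizer.py > tokenizer.log 2>&1 &
+ ```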
+
+ #### System prompt cache
+
+ - The system prompt can be preset at startup via the `--system_prompt` option
+ - The system prompt can be cached as a kv cache in a specified folder via `--kvcache_path`, for quick loading on the next run (see the example after the run script below)
+ - This folder must be created manually before running, for example `mkdir kvcache`
+
+ ```
+ root@ax650:~/wangli/huggingface/DeepSeek-R1-Distill-Qwen-7B# cat ./run_deepseek-r1_7b_int4_ax650.sh
+ ./main_ax650 \
+ --template_filename_axmodel "deepseek-r1-7b-int4-ax650/qwen2_p128_l%d_together.axmodel" \
+ --axmodel_num 28 \
+ --url_tokenizer_model "http://127.0.0.1:12345" \
+ --filename_post_axmodel "deepseek-r1-7b-int4-ax650/qwen2_post.axmodel" \
+ --filename_tokens_embed "deepseek-r1-7b-int4-ax650/model.embed_tokens.weight.bfloat16.bin" \
+ --tokens_embed_num 152064 \
+ --tokens_embed_size 3584 \
+ --use_mmap_load_embed 1 \
+ --live_print 1
 ```
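+
+ To use the system prompt cache described above, the two options can be appended to that command; a hypothetical sketch (the prompt text and cache folder are illustrative, the remaining arguments are unchanged):
+
+ ```
+ # create the cache folder once before the first run
+ mkdir -p kvcache
+ ./main_ax650 \
+ --template_filename_axmodel "deepseek-r1-7b-int4-ax650/qwen2_p128_l%d_together.axmodel" \
+ --axmodel_num 28 \
+ --url_tokenizer_model "http://127.0.0.1:12345" \
+ --filename_post_axmodel "deepseek-r1-7b-int4-ax650/qwen2_post.axmodel" \
+ --filename_tokens_embed "deepseek-r1-7b-int4-ax650/model.embed_tokens.weight.bfloat16.bin" \
+ --tokens_embed_num 152064 \
+ --tokens_embed_size 3584 \
+ --use_mmap_load_embed 1 \
+ --live_print 1 \
+ --system_prompt "You are DeepSeek. You are a helpful assistant." \
+ --kvcache_path "./kvcache"
+ ```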
+
+ #### Inference with an AX650 host, such as M4N-Dock (爱芯派Pro) or the AX650N DEMO Board
+
+ Open another terminal and run `run_deepseek-r1_7b_int4_ax650.sh`:
+
+ ```
+ root@ax650:~/huggingface/DeepSeek-R1-Distill-Qwen-7B# ./run_deepseek-r1_7b_int4_ax650.sh
+ [I][ Init][ 110]: LLM init start
+ [I][ Init][ 34]: connect http://127.0.0.1:12345 ok
+ [I][ Init][ 57]: uid: e034d25e-4fcb-4c3b-b19a-df31c278d9a8
+ bos_id: 151646, eos_id: 151643
+ 3% | ██ | 1 / 31 [2.16s<67.02s, 0.46 count/s] tokenizer init ok
+ [I][ Init][ 26]: LLaMaEmbedSelector use mmap
+ 100% | ████████████████████████████████ | 31 / 31 [21.75s<21.75s, 1.43 count/s] init post axmodel ok, remain_cmm(4189 MB)
+ [I][ Init][ 188]: max_token_len : 2047
+ [I][ Init][ 193]: kv_cache_size : 512, kv_cache_num: 2047
+ [I][ Init][ 201]: prefill_token_num : 128
+ [I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
+ [I][ Init][ 205]: grp: 2, prefill_max_token_num : 128
+ [I][ Init][ 205]: grp: 3, prefill_max_token_num : 256
+ [I][ Init][ 205]: grp: 4, prefill_max_token_num : 384
+ [I][ Init][ 205]: grp: 5, prefill_max_token_num : 512
+ [I][ Init][ 205]: grp: 6, prefill_max_token_num : 640
+ [I][ Init][ 205]: grp: 7, prefill_max_token_num : 768
+ [I][ Init][ 205]: grp: 8, prefill_max_token_num : 896
+ [I][ Init][ 205]: grp: 9, prefill_max_token_num : 1024
+ [I][ Init][ 209]: prefill_max_token_num : 1024
+ [I][ load_config][ 282]: load config:
+ {
+ "enable_repetition_penalty": false,
+ "enable_temperature": true,
+ "enable_top_k_sampling": true,
+ "enable_top_p_sampling": false,
+ "penalty_window": 20,
+ "repetition_penalty": 1.2,
+ "temperature": 0.9,
+ "top_k": 10,
+ "top_p": 0.8
+ }
+
+ [I][ Init][ 218]: LLM init ok
+ Type "q" to exit, Ctrl+c to stop current running
+ [I][ GenerateKVCachePrefill][ 275]: input token num : 13, prefill_split_num : 1 prefill_grpid : 2
+ [I][ GenerateKVCachePrefill][ 315]: input_num_token:13
+ [I][ main][ 228]: precompute_len: 13
+ [I][ main][ 229]: system_prompt:
+ prompt >> 你是谁
+ [I][ SetKVCache][ 529]: prefill_grpid:2 kv_cache_num:128 precompute_len:13 input_num_token:6
+ [I][ SetKVCache][ 532]: current prefill_max_token_num:896
+ [I][ Run][ 658]: input token num : 6, prefill_split_num : 1
+ [I][ Run][ 684]: input_num_token:6
+ [I][ Run][ 807]: ttft: 764.85 ms
+ Alright, the user greeted me by saying, "You are DeepSeek. You are a helpful assistant." I need to respond in a friendly and professional manner. I should acknowledge that I'm DeepSeek, an AI assistant, and offer assistance. I'll keep it concise and welcoming.
+ </think>
+
+ 您好!我是DeepSeek,一个由深度求索公司开发的智能助手。我随时准备为您提供帮助和解答。请问有什么可以为您服务的?
+
+ [N][ Run][ 921]: hit eos, avg 4.87 token/s
+
+ [I][ GetKVCache][ 498]: precompute_len:110, remaining:914
+ prompt >> q
 ```
+
+ (In the transcript above, the prompt 你是谁 means "Who are you?"; the model's Chinese reply translates to: "Hello! I am DeepSeek, an intelligent assistant developed by DeepSeek (深度求索). I am ready to provide help and answers at any time. How may I assist you?")