Update README.md #219
opened by pezhmansamaniii

README.md CHANGED
@@ -138,7 +138,7 @@ We slightly change their configs and tokenizers. Please use our setting to run t
 | | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
 | | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
 | Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
-| | Codeforces (Percentile) |
+| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
 | | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
 | | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
 | | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
@@ -154,16 +154,20 @@ We slightly change their configs and tokenizers. Please use our setting to run t
 
 ### Distilled Model Evaluation
 
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
+tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
 
 <div align="center">
 
 | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
 |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
-| GPT-4o-0513 |
-| Claude-3.5-Sonnet-1022 | 16.0 |
+| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
+| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
 | o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
 | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
-| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 |
+| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
 | DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
 | DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
 | DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
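The snippet this hunk adds only loads the checkpoint and tokenizer. Below is a minimal generation sketch built on those same `transformers` objects; the dtype, device placement, and prompt are illustrative assumptions and not part of the PR, while the sampling values follow the usage recommendations later in this README.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: a GPU with bf16 support
    device_map="auto",           # assumption: `accelerate` is installed
)

# DeepSeek-R1 models are meant to be prompted without a system message;
# all instructions go into the user turn.
messages = [{"role": "user", "content": "Solve x^2 - 5x + 6 = 0. Please reason step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.6,  # sampling values from the README's usage recommendations
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```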
@@ -201,7 +205,11 @@ You can also easily start a service using [SGLang](https://github.com/sgl-projec
 ```bash
 python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
 ```
+# Example run with Hugging Face, no extra configuration needed
+from transformers import AutoModelForCausalLM, AutoTokenizer
 
+model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
+tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-32B")
 ### Usage Recommendations
 
 **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
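Once the `sglang.launch_server` command above is running, the server can be queried over its OpenAI-compatible API. A minimal client sketch follows, assuming the server's default port 30000 and the `openai` Python package; the model name matches the launched checkpoint and the prompt is illustrative, not part of the PR.

```python
from openai import OpenAI

# SGLang's launch_server exposes an OpenAI-compatible API, by default on port 30000.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # any placeholder key works locally

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is 17 * 24? Please reason step by step."}],
    temperature=0.6,   # temperature recommended by this README
    max_tokens=1024,
)
print(response.choices[0].message.content)
```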