## Evaluation

The model was evaluated on the AIME24, GPQA Diamond, MATH-500, and GSM8K benchmarks. The AIME24, GPQA Diamond, and MATH-500 tasks were run with the [lighteval](https://github.com/huggingface/lighteval/tree/v0.10.0) framework, while GSM8K was run with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness).

### Accuracy
### Reproduction

The results for AIME24, MATH-500, and GPQA Diamond were obtained using the following commands, with custom tasks and 10 rounds with different random seeds for a reliable performance estimate.

```
MODEL_ARGS="model_name=amd/DeepSeek-R1-0528-MXFP4-ASQ,dtype=bfloat16,tensor_parallel_size=8,max_model_length=71536,max_num_batched_tokens=32768,gpu_memory_utilization=0.85,generation_parameters={max_new_tokens:65536,temperature:0.6,top_p:0.95,seed:$SEED}"
lighteval vllm $MODEL_ARGS "custom|aime24_single|0|0,custom|math_500_single|0|0,
2>&1 | tee -a "$LOG"
```
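
Because the lighteval run above is repeated for 10 seeds, the reported score is an aggregate over rounds. A minimal sketch of that aggregation — the helper and the placeholder scores below are hypothetical, not part of the release — reporting a mean and sample standard deviation:

```python
from statistics import mean, stdev

def aggregate_rounds(per_seed_scores):
    """Aggregate per-seed accuracies (e.g. from 10 seeded rounds)
    into a (mean, sample standard deviation) pair."""
    return mean(per_seed_scores), stdev(per_seed_scores)

# Illustrative placeholder scores, one per seed -- not real results.
scores = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.81, 0.84, 0.80, 0.82]
avg, sd = aggregate_rounds(scores)
print(f"accuracy = {avg:.3f} +/- {sd:.3f}")
```

Reporting the spread across seeds, rather than a single run, is what makes the high-variance AIME24-style benchmarks comparable between models.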

The GSM8K result was obtained using [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the following commands.

```
MODEL_ARGS="model=amd/DeepSeek-R1-0528-MXFP4-ASQ,base_url=http://localhost:8000/v1/completions,num_concurrent=999999,timeout=999999,tokenized_requests=False,max_length=38768,temperature=0.6,top_p=0.95,add_bos_token=True,seed=$SEED"
```
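
The `MODEL_ARGS` value above is a flat comma-separated list of `key=value` pairs handed to the harness. A minimal sketch of how such a string can be parsed — the helper is hypothetical (not a harness API), it assumes no nested `{...}` blocks in the values, and the concrete `seed=0` stands in for `$SEED`:

```python
def parse_model_args(model_args):
    """Split a flat 'k1=v1,k2=v2,...' argument string into a dict.

    Assumes no value contains a comma; this holds for the
    lm-eval-harness MODEL_ARGS above, but not for strings with a
    nested generation_parameters={...} block.
    """
    return dict(pair.split("=", 1) for pair in model_args.split(","))

args = parse_model_args(
    "model=amd/DeepSeek-R1-0528-MXFP4-ASQ,"
    "base_url=http://localhost:8000/v1/completions,"
    "num_concurrent=999999,timeout=999999,tokenized_requests=False,"
    "max_length=38768,temperature=0.6,top_p=0.95,add_bos_token=True,seed=0"
)
print(args["base_url"])  # -> http://localhost:8000/v1/completions
```

The `base_url` pointing at a local `/v1/completions` endpoint indicates the harness queries an already-running OpenAI-compatible server rather than loading the model itself, with `num_concurrent` requests in flight.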