Update README.md

### Reproduction

The results for GPQA Diamond and AIME25 were obtained using [gpt_oss.evals](https://github.com/openai/gpt-oss/tree/main/gpt_oss/evals) with the `medium` effort setting and the vLLM Docker image `rocm/vllm-private:mxfp4_fp8_gpt_oss_native_20251226`.

vLLM and AITER are already compiled and pre-installed in the Docker image; there is no need to download or install them again.

#### Launching server
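
The server is launched with `vllm serve` on the quantized checkpoint. Only the first line of the command is visible in this excerpt, so the flags after the model name below are illustrative assumptions, not the README's actual settings:

```shell
# Serve the quantized checkpoint with vLLM's OpenAI-compatible server.
# NOTE: only the `vllm serve <model>` line is from the README; the
# --tensor-parallel-size and --port values below are illustrative
# assumptions -- consult the full README for the exact launch flags.
vllm serve amd/gpt-oss-120b-w-mxfp4-a-fp8-qkvo-ptpc-fp8-kv-fp8-fp8attn \
  --tensor-parallel-size 8 \
  --port 8000
```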

```
export OPENAI_API_KEY="EMPTY"

python -m gpt_oss.evals --model amd/gpt-oss-120b-w-mxfp4-a-fp8-qkvo-ptpc-fp8-kv-fp8-fp8attn --eval gpqa,aime25 --reasoning-effort medium --n-threads 128
```
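
Before launching the full evals, a single request against the OpenAI-compatible endpoint confirms the server is responding. The `localhost:8000` base URL is an assumption (vLLM's default port); adjust it to match your launch flags:

```shell
# Smoke-test the OpenAI-compatible chat endpoint.
# ASSUMPTION: the server is listening on localhost:8000 (vLLM's default).
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "amd/gpt-oss-120b-w-mxfp4-a-fp8-qkvo-ptpc-fp8-kv-fp8-fp8attn",
        "messages": [{"role": "user", "content": "Say hello."}]
      }'
```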
# License