Add Usage and Evaluation sections with K=3/5/7/10 benchmarks
README.md
P-EAGLE follows the vanilla EAGLE 3 recipe, using three layers of hidden states from the target model.
Similar to [nvidia/gpt-oss-120b-Eagle3-long-context](https://huggingface.co/nvidia/gpt-oss-120b-Eagle3-long-context), only prompts from the datasets were used for data synthesis (the original GPT responses were not used); the synthesized data was then used to train P-EAGLE.

### Usage

To serve the checkpoint in [vLLM](https://github.com/vllm-project/vllm):

> **Note:** GPT-OSS 20B uses hybrid attention (sliding window + full attention). When combined with the P-EAGLE drafter, a [KV cache grouping fix](https://github.com/vllm-project/vllm/pull/35062) is required for vLLM to correctly separate speculator layers into a dedicated KV cache group. Without this fix, vLLM will fail with a `validate_same_kv_cache_group` error. Apply the fix from the PR or use a vLLM version that includes it.
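>
> If you need the fix before it is available in a release, one option is to build vLLM from source on the PR branch. This is a sketch, not the only way to apply it: the local branch name below is arbitrary, and the `pip install -e .` step assumes you have the build prerequisites from vLLM's from-source installation docs.

```
git clone https://github.com/vllm-project/vllm.git && cd vllm
# fetch the PR with the KV cache grouping fix into a local branch
git fetch origin pull/35062/head:kv-cache-group-fix
git checkout kv-cache-group-fix
# from-source install; see vLLM's installation docs for prerequisites
pip install -e .
```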
```
CUDA_VISIBLE_DEVICES=0 VLLM_USE_FLASHINFER_MOE_MXFP4_MXFP8=1 \
vllm serve openai/gpt-oss-20b \
    --speculative-config '{"method": "eagle3", "model": "amazon/GPT-OSS-20B-P-EAGLE", "num_speculative_tokens": 7, "parallel_drafting": true}' \
    --tp 1 \
    --max-num-batched-tokens 32768 \
    --kv-cache-dtype fp8 \
    --async-scheduling \
    --stream-interval 20 \
    --max-cudagraph-capture-size 4096 \
    --no-enable-prefix-caching \
    --port 8050 \
    --gpu-memory-utilization 0.9 \
    --max-num-seqs 128 \
    --max-model-len 32768
```
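Speculative decoding with the P-EAGLE drafter runs entirely server-side, so clients call the OpenAI-compatible API as usual. A minimal request against the endpoint configured above (the prompt is only illustrative):

```
curl http://localhost:8050/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "openai/gpt-oss-20b",
        "messages": [{"role": "user", "content": "Explain speculative decoding in two sentences."}],
        "max_tokens": 256,
        "temperature": 0
    }'
```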
### Evaluation

Measured with `vllm bench serve`, with at most 2048 new tokens per request, concurrency 1, and temperature 0 on a single H200 GPU (MXFP4 weights, FP8 KV cache):

**Acceptance Length**

| K  | MT-Bench (80 prompts) | HumanEval (164 prompts) | GSM-8K (80 prompts) |
|----|-----------------------|-------------------------|---------------------|
| 3  | 2.75                  | 2.96                    | 2.83                |
| 5  | 3.01                  | 3.57                    | 3.26                |
| 7  | 3.30                  | 3.80                    | 3.44                |
| 10 | 3.46                  | 3.88                    | 3.72                |

**Throughput (output tok/s, concurrency=1)**

| K  | MT-Bench | HumanEval | GSM-8K |
|----|----------|-----------|--------|
| 3  | 490      | 520       | 494    |
| 5  | 504      | 582       | 526    |
| 7  | 533      | 600       | 536    |
| 10 | 534      | 583       | 552    |
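> **Note:** K corresponds to the number of draft tokens per step, i.e. `num_speculative_tokens` in `--speculative-config` (the serve command above uses 7). To benchmark a different K, e.g. K=3, restart the server with only that value changed:

```
# same `vllm serve` invocation as above, only the draft length differs
--speculative-config '{"method": "eagle3", "model": "amazon/GPT-OSS-20B-P-EAGLE", "num_speculative_tokens": 3, "parallel_drafting": true}'
```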
The command used for benchmarking is shown below.
```
vllm bench serve \
    --backend openai-chat \
    --base-url http://localhost:8050 \
    --endpoint /v1/chat/completions \
    --model openai/gpt-oss-20b \
    --dataset-name custom \
    --dataset-path /home/ubuntu/eval_datasets/humaneval_custom.jsonl \
    --custom-output-len 2048 \
    --num-prompts 164 \
    --max-concurrency 1 \
    --request-rate inf \
    --temperature 0 \
    --save-result \
    --save-detailed
```
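The command above targets HumanEval (164 prompts). For MT-Bench and GSM-8K (80 prompts each, per the tables above), only the dataset path and prompt count change; the paths below are placeholders mirroring the HumanEval path, not paths from the original setup:

```
# MT-Bench (placeholder path)
--dataset-path /home/ubuntu/eval_datasets/mtbench_custom.jsonl --num-prompts 80
# GSM-8K (placeholder path)
--dataset-path /home/ubuntu/eval_datasets/gsm8k_custom.jsonl --num-prompts 80
```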
### Citation

```
...
journal={arXiv preprint arXiv:2602.01469},
year={2026}
}
```