Paper: [EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test](https://arxiv.org/abs/2503.01840)
This is a speculator model designed for use with Qwen3-8B, based on the EAGLE-3 speculative decoding algorithm.
It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Pro-300K-Filtered and the HuggingFaceH4/ultrachat_200k datasets.
The model was trained with thinking enabled.
This model should be used with the Qwen3-8B chat template, specifically through the /chat/completions endpoint.
```shell
vllm serve Qwen/Qwen3-8B \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-8B-Thinking-speculator.eagle3",
    "num_speculative_tokens": 5,
    "method": "eagle3"
  }'
```
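Once the server is up, the speculator is transparent to clients: requests go to the standard OpenAI-compatible `/v1/chat/completions` endpoint, addressing the target model. A minimal client sketch using only the standard library (the prompt and `localhost:8000` base URL are illustrative assumptions, matching the `vllm serve` defaults above):

```python
import json
import urllib.request

def build_request(prompt: str, base_url: str = "http://localhost:8000/v1"):
    # The request names the target model; the speculator configured
    # server-side is applied automatically during decoding.
    payload = {
        "model": "Qwen/Qwen3-8B",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_request("Write a haiku about GPUs.")
# With the server running, send it with:
# resp = json.load(urllib.request.urlopen(req))
```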
Mean acceptance length (tokens produced per target-model forward pass) on each benchmark dataset, as `num_speculative_tokens` (k) varies:

| Dataset | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| HumanEval | 1.83 | 2.43 | 2.90 | 3.23 | 3.42 |
| math_reasoning | 1.85 | 2.53 | 3.04 | 3.44 | 3.74 |
| qa | 1.77 | 2.30 | 2.67 | 2.90 | 3.11 |
| question | 1.80 | 2.37 | 2.78 | 3.12 | 3.31 |
| rag | 1.77 | 2.30 | 2.69 | 2.94 | 3.11 |
| summarization | 1.70 | 2.15 | 2.42 | 2.59 | 2.69 |
| translation | 1.74 | 2.25 | 2.59 | 2.81 | 2.93 |
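Higher k is not automatically better: each extra speculative token adds draft-model work per step. As a back-of-the-envelope sketch (not from this card), if each draft token costs a fixed fraction of one target forward pass, the estimated speedup is the acceptance length divided by the per-step cost:

```python
# Rough speedup estimate from the acceptance lengths in the table above.
# Assumption (hypothetical, not measured here): each draft token costs
# DRAFT_COST of one target-model forward pass.
DRAFT_COST = 0.05

acceptance = {  # table rows, columns k=1..5
    "HumanEval": [1.83, 2.43, 2.90, 3.23, 3.42],
    "summarization": [1.70, 2.15, 2.42, 2.59, 2.69],
}

def est_speedup(tau: float, k: int, c: float = DRAFT_COST) -> float:
    # tau tokens accepted per verification step; each step costs one
    # target pass plus k draft passes at relative cost c each.
    return tau / (1.0 + k * c)

# Best (speedup, k) per dataset under this cost model.
best = {name: max((est_speedup(tau, k + 1), k + 1)
                  for k, tau in enumerate(taus))
        for name, taus in acceptance.items()}
```

Under this toy cost model, code-like workloads with long acceptance lengths favor k=5, while summarization peaks slightly earlier; actual end-to-end throughput depends on hardware and batch size.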
The acceptance lengths above were measured against the running server with the following `guidellm` benchmark configuration:
```shell
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
GUIDELLM__MAX_CONCURRENCY=128 \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type throughput \
  --max-seconds 300 \
  --backend-args '{"extra_body": {"chat_completions": {"temperature": 0.0}}}'
```
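A malformed JSON fragment in `--data-args` or `--backend-args` only surfaces as an error at launch, which is easy to miss before a long run. A small sketch that validates the fragments from the command above before starting the benchmark:

```python
# Pre-flight check: confirm the inline JSON passed to guidellm parses
# cleanly and has the expected shape. Strings copied from the command above.
import json

backend_args = '{"extra_body": {"chat_completions": {"temperature": 0.0}}}'
data_args = '{"data_files": "HumanEval.jsonl"}'

for name, raw in [("backend-args", backend_args), ("data-args", data_args)]:
    parsed = json.loads(raw)  # raises json.JSONDecodeError on a typo
    assert isinstance(parsed, dict), f"--{name} must be a JSON object"
```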