---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
  - neuralmagic
  - redhat
  - speculators
  - eagle3
  - qwen
---

# Qwen3-14B-speculator.eagle3

## Model Overview

- **Verifier:** Qwen/Qwen3-14B
- **Speculative Decoding Algorithm:** EAGLE-3
- **Model Architecture:** Eagle3Speculator
- **Release Date:** 09/18/2025
- **Version:** 1.0
- **Model Developers:** Red Hat

This is a speculator model designed for use with Qwen/Qwen3-14B, based on the EAGLE-3 speculative decoding algorithm. It was trained with the speculators library on a combination of the Aeala/ShareGPT_Vicuna_unfiltered dataset and the train_sft split of HuggingFaceH4/ultrachat_200k. This model should be used with the Qwen/Qwen3-14B chat template, specifically through the /chat/completions endpoint.

## Use with vLLM

```bash
vllm serve Qwen/Qwen3-14B \
  -tp 1 \
  --speculative-config '{
    "model": "RedHatAI/Qwen3-14B-speculator.eagle3",
    "num_speculative_tokens": 3,
    "method": "eagle3"
  }'
```
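Once the server is up, requests go through the OpenAI-compatible /chat/completions route; speculative decoding is transparent to the client. A minimal Python sketch of the request payload (the local URL, prompt, and sampling values below are illustrative assumptions, not part of this model card):

```python
import json

# Payload for the /chat/completions endpoint of the vLLM server started
# above. The prompt and sampling values here are illustrative only.
payload = {
    "model": "Qwen/Qwen3-14B",
    "messages": [{"role": "user", "content": "Write a haiku about llamas."}],
    "temperature": 0.6,
    "top_p": 0.95,
}

# With the server running locally, send it with e.g. requests:
# import requests
# r = requests.post("http://localhost:8000/v1/chat/completions", json=payload)
# print(r.json()["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

The response format is identical to a non-speculative deployment, so existing clients need no changes.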

## Evaluations

### Use cases

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | gsm8k | 80 |
| Text Summarization | CNN/Daily Mail | 80 |

### Acceptance lengths

| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 |
|---|---|---|---|---|---|---|---|
| Coding | 1.71 | 2.14 | 2.41 | 2.53 | 2.55 | 2.70 | 2.67 |
| Math Reasoning | 1.72 | 2.18 | 2.43 | 2.59 | 2.69 | 2.75 | 2.76 |
| Text Summarization | 1.60 | 1.90 | 2.06 | 2.14 | 2.17 | 2.19 | 2.21 |
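Acceptance length is the mean number of tokens emitted per verifier forward pass at a given number of speculative tokens k. A rough sketch of what the k=3 column (matching `num_speculative_tokens: 3` in the serving command) implies for verifier work, assuming draft-model overhead is negligible (a simplifying assumption):

```python
# k=3 acceptance lengths from the table above.
acceptance = {
    "Coding": 2.41,
    "Math Reasoning": 2.43,
    "Text Summarization": 2.06,
}

for use_case, tokens_per_pass in acceptance.items():
    # Fraction of verifier forward passes avoided relative to plain
    # autoregressive decoding (one pass per token).
    reduction = 1 - 1 / tokens_per_pass
    print(f"{use_case}: ~{reduction:.0%} fewer verifier passes")
```

End-to-end speedup is lower than this in practice, since the draft model and tree verification add their own overhead.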

### Performance benchmarking (1xA100)

*(Benchmark charts: Coding, Math Reasoning, Text Summarization.)*
<details>
<summary>Configuration</summary>
- temperature: 0.6
- top_p: 0.95
- top_k: 20
- repetitions: 3
- time per experiment: 10min
- hardware: 1xA100
- vLLM version: 0.11.0
- GuideLLM version: 0.3.0

**Command**

```bash
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
  --target "http://localhost:8000/v1" \
  --data "RedHatAI/speculator_benchmarks" \
  --data-args '{"data_files": "HumanEval.jsonl"}' \
  --rate-type sweep \
  --max-seconds 600 \
  --output-path "Qwen3-14B-HumanEval.json" \
  --backend-args '{"extra_body": {"chat_completions": {"temperature":0.6, "top_p":0.95, "top_k":20}}}'
```

</details>