AccRate (α=1.0) — EAGLE3 Draft Model for Qwen3-8B

Our method: the per-step draft loss is weighted by token acceptance rate, w_s = 1 + α(1 − β_s) with α = 1.0, where β_s is the profiled token acceptance rate at draft step s.
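
In code, the weighting amounts to scaling each draft step's training loss before reduction. A minimal PyTorch sketch follows; the function name and the normalization by Σ w_s are our assumptions, and SpecForge's actual reduction may differ.

```python
import torch

def weighted_draft_loss(per_step_loss: torch.Tensor,
                        beta: torch.Tensor,
                        alpha: float = 1.0) -> torch.Tensor:
    """Per-step weighted loss: w_s = 1 + alpha * (1 - beta_s).

    per_step_loss: shape (S,), mean training loss at each draft step s
    beta:          shape (S,), offline-profiled acceptance rate beta_s in [0, 1]
    """
    weights = 1.0 + alpha * (1.0 - beta)  # low-acceptance steps get more weight
    return (weights * per_step_loss).sum() / weights.sum()
```

Steps with low acceptance rates (typically the deeper draft steps) receive proportionally more weight, pushing training effort toward the positions where speculation currently fails.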

Part of a course project evaluating per-step weighted loss functions for training EAGLE3 draft models. Full pipeline and source: https://github.com/XLOverflow/anlp_course_project

Collection: Qwen3 EAGLE3 — Weighted Loss Variants

Training

  • Framework: SpecForge (our fork: https://github.com/XLOverflow/SpecForge)
  • Target model: Qwen/Qwen3-8B
  • Draft init: AngelSlim/Qwen3-8B_eagle3
  • Data: ShareGPT-style reasoning traces (see scripts/data/ in project repo)
  • Loss weight: w_s = 1 + α(1 − β_s), α=1.0
  • Initialized from: baseline-uniform/epoch_4_step_82000
  • Additional epochs: 1
  • β_s profiled offline via scripts/train/profile_beta.py
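
The β_s profile is a per-step acceptance frequency measured once, before training. The internals of profile_beta.py are not reproduced here; the sketch below only illustrates the quantity it estimates, assuming a hypothetical record format of (draft_step, was_accepted) pairs collected during speculative decoding runs.

```python
from collections import defaultdict
from typing import Iterable, Tuple

def estimate_beta(records: Iterable[Tuple[int, bool]], num_steps: int) -> list:
    """Estimate beta_s = accepted_s / proposed_s for each draft step s."""
    proposed = defaultdict(int)
    accepted = defaultdict(int)
    for step, was_accepted in records:
        proposed[step] += 1
        accepted[step] += int(was_accepted)
    # Steps never reached default to beta = 0, i.e. maximum weight under w_s.
    return [accepted[s] / proposed[s] if proposed[s] else 0.0
            for s in range(num_steps)]
```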

Evaluation (Qwen3-8B target)

Dataset    τ (acceptance length)    Speedup    Accuracy
GSM8K      7.359                    4.588×     95.15%
MATH500    7.326                    4.606×     95.20%

For reference: vanilla autoregressive decoding is 1× by definition, and the original EAGLE typically reaches ≈ 2× speedup.

Files

  • model.safetensors — draft model weights (~763 MB)
  • config.json — model config
  • Corresponds to: outputs/eagle3-accrate/epoch_0_step_17026 in the original training output

Optimizer state (~3 GB) is not uploaded. To continue training, restart from these weights using the training scripts in the project repo.

Usage

```python
from huggingface_hub import snapshot_download

draft_path = snapshot_download(repo_id="XLOverflow/qwen3-eagle3-accrate")
# Then load with EAGLE's EaModel; see scripts/eval/eval_combined.py in the project repo.
```
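
For end-to-end generation, loading usually follows upstream EAGLE's EaModel interface. The sketch below mirrors EAGLE's published from_pretrained/eagenerate API and is an assumption about how this checkpoint plugs in; the exact arguments this project uses (including any EAGLE-3-specific flags) live in scripts/eval/eval_combined.py, which should be treated as authoritative.

```python
import torch
from eagle.model.ea_model import EaModel  # from the upstream EAGLE repo

# Assumed invocation mirroring EAGLE's documented API; check
# scripts/eval/eval_combined.py for the arguments actually used here.
model = EaModel.from_pretrained(
    base_model_path="Qwen/Qwen3-8B",  # target model
    ea_model_path=draft_path,         # draft weights downloaded above
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

prompt = "What is 12 * 7?"
input_ids = torch.as_tensor(model.tokenizer([prompt]).input_ids).cuda()
output_ids = model.eagenerate(input_ids, temperature=0.0, max_new_tokens=256)
print(model.tokenizer.decode(output_ids[0]))
```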