Qwen3 EAGLE3 — Weighted Loss Variants
A collection of Qwen3-8B draft models for the CMU 11-711 course project (7 items).
Our method: the per-step training loss is weighted by token acceptance rate, w_s = 1 + α(1 − β_s), where β_s is the measured acceptance rate at draft step s and α = 1.0. Steps whose drafts are accepted less often receive a larger weight.
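As a quick sketch of how this weighting plays out (the β values and losses below are illustrative, not from the actual profiling run):

```python
ALPHA = 1.0  # weighting strength, alpha = 1.0 as stated above

def step_weights(betas, alpha=ALPHA):
    # w_s = 1 + alpha * (1 - beta_s): draft steps with a lower
    # acceptance rate beta_s receive a larger loss weight.
    return [1.0 + alpha * (1.0 - b) for b in betas]

def weighted_loss(step_losses, betas, alpha=ALPHA):
    # Weighted average of per-step draft losses.
    w = step_weights(betas, alpha)
    return sum(wi * li for wi, li in zip(w, step_losses)) / sum(w)

# Hypothetical profile: acceptance drops at deeper draft steps,
# so later steps are up-weighted.
betas = [0.9, 0.7, 0.5]
print(step_weights(betas))  # roughly [1.1, 1.3, 1.5]
```

With α = 0 this reduces to the uniform baseline, so α controls how aggressively low-acceptance steps are emphasized.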
Part of a course project evaluating per-step weighted loss functions for training EAGLE3 draft models. Full pipeline and source: https://github.com/XLOverflow/anlp_course_project
- Target model: Qwen/Qwen3-8B
- Draft base: AngelSlim/Qwen3-8B_eagle3
- Training data: prepared with the scripts in scripts/data/ in the project repo
- Uniform-loss baseline checkpoint: baseline-uniform/epoch_4_step_82000
- Acceptance-rate (β) profiling: scripts/train/profile_beta.py

| Dataset | τ (mean accepted length) | Speedup | Accuracy |
|---|---|---|---|
| GSM8K | 7.359 | 4.588× | 95.15% |
| MATH500 | 7.326 | 4.606× | 95.20% |
Reference baselines: vanilla autoregressive decoding ≈ 1× speedup; the original EAGLE ≈ 2× speedup.
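For intuition on how τ relates to speedup, here is a back-of-envelope cost model (my simplification, not the project's evaluation code): if each verification cycle accepts τ tokens on average and costs (1 + c) target-forward equivalents, where c is the relative drafting overhead, the expected speedup is τ / (1 + c).

```python
def implied_overhead(tau, speedup):
    # Solve tau / (1 + c) = speedup for c, the per-cycle drafting
    # overhead in target-forward equivalents (simplified cost model).
    return tau / speedup - 1.0

# GSM8K row of the table above: tau = 7.359, speedup = 4.588x
print(round(implied_overhead(7.359, 4.588), 2))  # -> 0.6
```

Under this model, the measured numbers imply roughly 0.6 target-forward equivalents of drafting overhead per cycle; the real pipeline's overhead structure may differ.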
Files:
- model.safetensors: draft model weights (~763 MB)
- config.json: model config

This checkpoint corresponds to outputs/eagle3-accrate/epoch_0_step_17026 in the original training output. Optimizer state (~3 GB) is not uploaded; use the project repo's training scripts to retrain from scratch if needed.
```python
from huggingface_hub import snapshot_download

# Download the draft model weights from the Hub
draft_path = snapshot_download(repo_id="XLOverflow/qwen3-eagle3-accrate")
# Then load with EAGLE's EaModel; see scripts/eval/eval_combined.py in the project repo.
```
Base model: AngelSlim/Qwen3-8B_eagle3