---
language:
- en
license: apache-2.0
tags:
- speculative-decoding
- eagle
- aurora
- inference-time-training
- code-generation
base_model: Qwen/Qwen3-Coder-Next-FP8
pipeline_tag: text-generation
---
# Aurora-Spec-Qwen3-Coder-Next-FP8
<div align="center">

[Website](https://aurora-spec-ai.github.io) | [Dataset](https://huggingface.co/datasets/zelc/onlinesd) | [Paper](https://arxiv.org/abs/2602.06932)

</div>
## Model Description
This is an EAGLE3 draft model trained **from scratch (random initialization)** using the **Aurora** inference-time training framework for speculative decoding. Unlike traditional approaches that fine-tune pre-trained models, this model is built entirely through Aurora's online training process. The model is optimized to generate high-quality draft tokens for the [Qwen/Qwen3-Coder-Next-FP8](https://huggingface.co/Qwen/Qwen3-Coder-Next-FP8) target model, achieving significant speedups in code generation tasks.
## Key Features
- **Training Approach**: Trained **from scratch** (random initialization) - no pre-training required
- **Framework**: Trained with Aurora - an advanced inference-time training system
- **Architecture**: EAGLE3 speculative decoding draft model
- **Target Model**: [Qwen/Qwen3-Coder-Next-FP8](https://huggingface.co/Qwen/Qwen3-Coder-Next-FP8)
- **Training Data**: [OnlineSD Code Dataset](https://huggingface.co/datasets/zelc/onlinesd/viewer/code)
- **Performance**: Achieves an average accept length of **3.1** draft tokens per verification step in speculative decoding
- **Training**: 10,000 training steps over 80,000 inference requests
## Target Model
This draft model is specifically designed to work with:
- **Model**: [Qwen/Qwen3-Coder-Next-FP8](https://huggingface.co/Qwen/Qwen3-Coder-Next-FP8)
- **Type**: Code generation language model
- **Precision**: FP8 quantized
- **Domain**: Programming and code synthesis
The draft model learns to predict the target model's token distribution during inference-time training, enabling efficient speculative decoding.
## Architecture
### EAGLE3 Speculative Decoding
This model implements the EAGLE3 (Extrapolation Algorithm for Greater Language-model Efficiency) architecture:
- **Draft Model**: Lightweight model that generates candidate tokens
- **Tree-based Attention**: Enables parallel verification of multiple draft tokens
- **Auto-regressive Generation**: Produces speculative token sequences
- **Dynamic Adaptation**: Updates during inference to match the target model's distribution (a minimal draft-and-verify loop is sketched after this list)
### Model Structure
- **Initialization**: Trained from scratch (random initialization, no pre-training)
- **Base Architecture**: Single-layer Transformer decoder
- **Precision**: FP8 (8-bit floating point)
- **Speculative Steps**: 5 tokens per iteration
- **Attention Mechanism**: Tree-based for parallel draft verification
- **Training Paradigm**: Online learning during inference (Aurora framework)
## Training Details
### Aurora Framework
This model was trained **from scratch** using Aurora, an inference-time training framework that:
- **No Pre-training Required**: Starts from random initialization and learns entirely through online training
- Updates the draft model dynamically during inference
- Uses reverse KL divergence for distribution matching, i.e. minimizing KL(draft || target) (a sketch of this loss follows the list)
- Employs online learning with periodic model updates
- Optimizes for both draft quality and speculative acceptance rate
- Demonstrates that effective draft models can be built from scratch without expensive pre-training
### Training Configuration
- **Hardware**: NVIDIA H200 GPU
- **Training Steps**: 10,000 steps over 80,000 inference requests
- **Learning Rate**: 1e-4
- **TTT Length**: 5 tokens
- **Speculative Steps**: 5
- **Update Interval**: Every 10 requests
- **Loss Weights**:
- NTP Loss: 1.0
- Prediction Loss: 1.0
- **KL Divergence**: Reverse KL divergence (draft → target); the full configuration is summarized in an illustrative dict below
### Dataset
Trained on the [OnlineSD Code Dataset](https://huggingface.co/datasets/zelc/onlinesd/viewer/code), which contains diverse coding examples suitable for training speculative decoding models.
## Benchmarks
### End-to-End Throughput Performance
Measured on a holdout dataset from the [OnlineSD Code Dataset](https://huggingface.co/datasets/zelc/onlinesd/viewer/code) using the final Aurora checkpoint.
**Qwen-Coder-Next: end-to-end throughput under varying batch size and lookahead**
We report tokens-per-second (TPS) statistics and speedup relative to the no-speculation baseline.
| BS | Config | Mean TPS | P50 TPS | P05 TPS | P95 TPS | Speedup (Mean) | Acc Len |
|:---:|:---------|:--------:|:-------:|:-------:|:-------:|:--------------:|:-------:|
| **1** | w/o spec | 176.4 | 178.0 | 172.3 | 178.4 | -- | -- |
| | lookahead 3 | 252.1 | 254.8 | 208.8 | 291.6 | 1.43× | 2.67 |
| | lookahead 4 | 263.1 | 264.0 | 211.8 | 312.7 | 1.49× | 2.91 |
| | **lookahead 5** | **265.7** | **264.8** | **208.7** | **320.5** | **1.51×** | **3.06** |
| **8** | w/o spec | 119.8 | 121.5 | 104.8 | 134.6 | -- | -- |
| | lookahead 3 | 141.0 | 138.9 | 110.4 | 178.5 | 1.18× | 2.67 |
| | lookahead 4 | 142.5 | 141.2 | 110.3 | 181.6 | 1.19× | 2.91 |
| | **lookahead 5** | **146.3** | **143.5** | **109.6** | **189.5** | **1.23×** | **3.07** |
| **16** | w/o spec | 99.6 | 102.1 | 74.5 | 119.2 | -- | -- |
| | lookahead 3 | 104.0 | 100.5 | 75.6 | 151.9 | 1.04× | 2.67 |
| | lookahead 4 | 105.6 | 101.1 | 77.5 | 149.7 | 1.06× | 2.92 |
| | **lookahead 5** | **107.6** | **103.7** | **75.7** | **156.6** | **1.09×** | **3.06** |
| **32** | w/o spec | 85.0 | 88.7 | 54.5 | 104.5 | -- | -- |
| | lookahead 3 | 78.9 | 72.8 | 53.0 | 122.3 | 0.93× | 2.68 |
| | lookahead 4 | 79.5 | 73.7 | 52.9 | 124.7 | 0.94× | 2.91 |
| | lookahead 5 | 80.3 | 72.6 | 52.8 | 130.7 | 0.94× | 3.06 |
### Performance Across Different Batch Sizes
Aurora provides the **largest gains at small-to-moderate batch sizes**, with up to **1.51× speedup at batch size 1**, demonstrating the effectiveness of speculative decoding for latency-critical scenarios. The benefits diminish as batch size increases (a rough cost model of this trade-off is sketched after the list):
- **Batch Size 1** (Best Case): Up to **1.51× speedup** with the lookahead 5 configuration (3.06 average accept length). At low batch sizes, the cost of draft generation and verification is well amortized by the reduced number of target model forward passes.
- **Batch Size 8** (Moderate): **1.23× speedup** with the lookahead 5 configuration (3.07 average accept length). Speculative decoding still provides meaningful throughput improvements for moderate batching.
- **Batch Size 16** (Diminishing Returns): **1.09× speedup** with the lookahead 5 configuration (3.06 average accept length). Benefits become marginal as verification overhead increases relative to baseline throughput.
- **Batch Size 32** (Negative Returns): At large batch sizes, **verification overhead dominates** and speculative decoding becomes slightly slower than the baseline (0.93-0.94×). The target model's batch processing efficiency outweighs the benefit of skipping forward passes.
**Metrics Explained**:
- **TPS**: Tokens per second (throughput)
- **Acc Len**: Average accept length (number of draft tokens accepted per verification step)
- **Speedup**: Relative to the no-speculation baseline
- **P05/P95**: 5th and 95th percentile throughput values
**Notably**, this performance is achieved with a model trained **from scratch** - it learns entirely through Aurora's online training process, demonstrating the effectiveness of inference-time training without expensive pre-training.
## Usage
This model is designed to be used as a draft model in EAGLE3 speculative decoding pipelines with Qwen3-Coder as the target model.
### Example 1: Python API (Offline Batch Inference)
```python
import sglang as sgl

def main():
    # Sample prompts
    prompts = [
        "Write a Python function to compute fibonacci numbers:",
        "Implement a binary search algorithm in Python:",
        "Create a class for a binary tree in Python:",
    ]

    # Create sampling params
    sampling_params = {"temperature": 0.7, "max_new_tokens": 256}

    # Initialize engine with speculative decoding
    llm = sgl.Engine(
        model_path="Qwen/Qwen3-Coder-Next-FP8",
        speculative_draft_model_path="togethercomputer/Aurora-Spec-Qwen3-Coder-Next-FP8",
        speculative_algorithm="EAGLE",
        speculative_num_steps=5,
        speculative_eagle_topk=1,
        speculative_num_draft_tokens=6,
        trust_remote_code=True,
    )

    # Generate with speculative decoding
    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs
    for prompt, output in zip(prompts, outputs):
        print("=" * 50)
        print(f"Prompt: {prompt}")
        print(f"Generated: {output['text']}")


# The __main__ condition is necessary when using spawn to create subprocesses
if __name__ == "__main__":
    main()
```
### Example 2: Launch Server (Production Use)
**Step 1: Start the SGLang server with speculative decoding**
```bash
python -m sglang.launch_server \
--model-path Qwen/Qwen3-Coder-Next-FP8 \
--speculative-draft-model-path togethercomputer/Aurora-Spec-Qwen3-Coder-Next-FP8 \
--speculative-algorithm EAGLE \
--speculative-num-steps 5 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 6 \
--trust-remote-code \
--port 30000 \
--host 0.0.0.0
```
**Step 2: Send requests to the server**
```python
import requests

# Server endpoint
url = "http://localhost:30000/v1/completions"

# Request payload
payload = {
    "prompt": "Write a Python function to compute fibonacci numbers:",
    "max_tokens": 256,
    "temperature": 0.7,
}

# Send request
response = requests.post(url, json=payload)
result = response.json()
print(result["choices"][0]["text"])
```
Or using OpenAI-compatible client:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",
    api_key="EMPTY",
)

response = client.completions.create(
    model="Qwen/Qwen3-Coder-Next-FP8",
    prompt="Write a Python function to compute fibonacci numbers:",
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].text)
```
### Local Model Paths
If you have downloaded the models locally, replace the HuggingFace model paths with local paths:
```bash
python -m sglang.launch_server \
--model-path /path/to/Qwen3-Coder-Next-FP8 \
--speculative-draft-model-path /path/to/Aurora-Spec-Qwen3-Coder-Next-FP8 \
--speculative-algorithm EAGLE \
--speculative-num-steps 5 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 6 \
--trust-remote-code \
--port 30000
```
## Limitations
- Optimized specifically for code generation tasks
- Performance may vary on non-coding domains
- Requires compatible EAGLE3 inference framework
- Best performance achieved with Qwen/Qwen3-Coder-Next-FP8 as target model
## Citation
If you use this model, please cite:
```bibtex
@article{aurora2026,
title={When RL Meets Adaptive Speculative Training: A Unified Training-Serving System},
author={Wang, Junxiong and Bie, Fengxiang and Li, Jisen and Zhou, Zhongzhu and Shao, Zelei and Wang, Yubo and Liu, Yinghui and Wu, Qingyang and May, Avner and Yanamandra, Sri and Zhang, Yineng and Zhang, Ce and Dao, Tri and Liang, Percy and Athiwaratkun, Ben and Song, Shuaiwen Leon and Xu, Chenfeng and Wu, Xiaoxia},
journal={arXiv preprint arXiv:2602.06932},
year={2026},
url={https://arxiv.org/abs/2602.06932}
}
```
## Acknowledgments
- **Target Model**: [Qwen/Qwen3-Coder-Next-FP8](https://huggingface.co/Qwen/Qwen3-Coder-Next-FP8) by Alibaba Cloud
- **Training Framework**: Aurora - Inference-Time Training System
- **Dataset**: [OnlineSD Code Dataset](https://huggingface.co/datasets/zelc/onlinesd)
## License
Apache 2.0