Aurora-Spec-Minimax-M2.1
Model Description
This is an EAGLE3 draft model trained from scratch (random initialization) using the Aurora inference-time training framework for speculative decoding. Unlike traditional approaches that fine-tune pre-trained models, this model is built entirely through Aurora's online training process. The model is optimized to generate high-quality draft tokens for the MiniMax M2.1 target model, achieving significant speedups across various batch sizes.
Key Features
- Training Approach: Trained from scratch (random initialization) - no pre-training required
- Framework: Trained with Aurora - an advanced inference-time training system
- Architecture: EAGLE3 speculative decoding draft model
- Target Model: MiniMax M2.1
- Performance: Achieves 2.62 average accept length with lookahead 4 (recommended configuration)
- Training: 44,000 inference requests on NVIDIA H200 GPU
- Speedup: Up to 1.58× speedup at batch size 1 (lookahead 3), 1.57× with lookahead 4 (recommended)
Target Model
This draft model is specifically designed to work with:
- Model: MiniMax M2.1
- Type: General-purpose language model
- Domain: Broad language understanding and generation
The draft model learns to predict the target model's token distribution during inference-time training, enabling efficient speculative decoding.
Architecture
EAGLE3 Speculative Decoding
This model implements the EAGLE3 (Extrapolation Algorithm for Greater Language-model Efficiency) architecture:
- Draft Model: Lightweight model that generates candidate tokens
- Tree-based Attention: Enables parallel verification of multiple draft tokens
- Auto-regressive Generation: Produces speculative token sequences
- Dynamic Adaptation: Updates during inference to match target model distribution
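The draft-then-verify cycle described above can be sketched as follows. This is a toy illustration only: the `draft_next`/`target_next` oracle callables are hypothetical stand-ins, and real EAGLE3 verifies a tree of candidates against model logits rather than exact token matches.

```python
def speculative_step(draft_next, target_next, context, lookahead):
    """Draft `lookahead` tokens, then keep the longest prefix the target agrees with.

    draft_next / target_next: callables mapping a token sequence to the next token.
    Returns (accepted_tokens, accept_length).
    """
    # 1. Draft model proposes `lookahead` tokens auto-regressively.
    drafted = []
    ctx = list(context)
    for _ in range(lookahead):
        tok = draft_next(ctx)
        drafted.append(tok)
        ctx.append(tok)

    # 2. Target model verifies the drafted tokens (in practice, in one
    #    parallel forward pass); accept the longest matching prefix.
    accepted = []
    ctx = list(context)
    for tok in drafted:
        if target_next(ctx) == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            break

    # The target's own next token is always kept, so every verification
    # step emits at least one token even if no drafts are accepted.
    accepted.append(target_next(ctx))
    return accepted, len(accepted) - 1  # accept length counts drafted tokens only
```

Averaging the per-step accept length over many verification steps gives the "Acc Len" statistic reported in the benchmarks below.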
Model Structure
- Initialization: Trained from scratch (random initialization, no pre-training)
- Base Architecture: Single-layer Transformer decoder
- Recommended Configuration: Lookahead 4 (speculative_num_steps=4)
- Attention Mechanism: Tree-based for parallel draft verification
- Training Paradigm: Online learning during inference (Aurora framework)
Training Details
Aurora Framework
This model was trained from scratch using Aurora, an inference-time training framework that:
- No Pre-training Required: Starts from random initialization and learns entirely through online training
- Updates the draft model dynamically during inference
- Uses reverse KL divergence for distribution matching (minimizing KL(draft || target))
- Employs online learning with periodic model updates
- Optimizes for both draft quality and speculative acceptance rate
- Demonstrates that effective draft models can be built from scratch without expensive pre-training
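As a minimal sketch of the distribution-matching objective (a hypothetical helper, not Aurora's actual training code), the reverse KL between the draft and target next-token distributions can be computed from their logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def reverse_kl(draft_logits, target_logits):
    """Reverse KL, KL(p_draft || p_target), for one position over the vocab."""
    p = softmax(draft_logits)
    q = softmax(target_logits)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
```

The reverse direction penalizes the draft for placing mass where the target places little, which is mode-seeking behavior well suited to proposing tokens the target will accept.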
Training Configuration
- Hardware: NVIDIA H200 GPU
- Training Requests: 44,000 inference requests
- Synchronization Interval: Every 800 requests
- Recommended Configuration: Lookahead 4
- KL Divergence: Reverse KL, i.e. KL(draft || target)
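The periodic synchronization schedule (every 800 requests over 44,000 total) can be sketched as below. The class and method names are illustrative, not Aurora's API.

```python
class PeriodicSync:
    """Toy model of syncing updated draft weights every `sync_interval` requests."""

    def __init__(self, sync_interval=800):
        self.sync_interval = sync_interval
        self.seen = 0    # requests observed so far
        self.syncs = 0   # weight pushes performed so far

    def on_request(self):
        """Record one inference request; return True when a sync is triggered."""
        self.seen += 1
        if self.seen % self.sync_interval == 0:
            self.syncs += 1  # push updated draft weights to the serving engine
            return True
        return False
```

Over the 44,000 training requests, this schedule triggers 44,000 / 800 = 55 synchronizations.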
Dataset
Trained on diverse prompts suitable for general-purpose language modeling and speculative decoding.
Benchmarks
End-to-End Throughput Performance
Measured on a holdout evaluation dataset using the final Aurora checkpoint.
MiniMax M2.1: end-to-end throughput under varying batch size and lookahead
We report tokens-per-second (TPS) statistics and speedup relative to the no-speculation baseline.
| BS | Config | Mean TPS | P50 TPS | P05 TPS | P95 TPS | Count | Speedup | Acc Len |
|---|---|---|---|---|---|---|---|---|
| 1 | w/o spec | 134.9 | 136.4 | 130.6 | 136.9 | 257 | -- | -- |
| 1 | lookahead 3 | 213.0 | 213.7 | 169.8 | 256.3 | 257 | 1.58× | 2.42 |
| 1 | lookahead 4 | 211.8 | 210.6 | 163.1 | 270.3 | 257 | 1.57× | 2.62 |
| 8 | w/o spec | 79.0 | 78.7 | 73.7 | 85.1 | 257 | -- | -- |
| 8 | lookahead 3 | 106.5 | 105.2 | 84.0 | 134.8 | 257 | 1.35× | 2.43 |
| 8 | lookahead 4 | 107.1 | 104.5 | 79.9 | 137.1 | 257 | 1.36× | 2.62 |
| 8 | lookahead 5 | 106.6 | 104.8 | 79.3 | 140.9 | 257 | 1.35× | 2.70 |
| 16 | w/o spec | 64.5 | 63.7 | 58.9 | 72.3 | 257 | -- | -- |
| 16 | lookahead 3 | 83.2 | 81.4 | 62.2 | 110.3 | 257 | 1.29× | 2.43 |
| 16 | lookahead 4 | 83.1 | 82.9 | 60.9 | 112.0 | 257 | 1.29× | 2.62 |
| 16 | lookahead 5 | 82.6 | 81.0 | 58.1 | 116.1 | 257 | 1.28× | 2.69 |
| 32 | w/o spec | 53.5 | 52.9 | 47.1 | 67.1 | 257 | -- | -- |
| 32 | lookahead 3 | 67.1 | 64.9 | 45.2 | 97.8 | 257 | 1.25× | 2.44 |
| 32 | lookahead 4 | 67.1 | 64.7 | 44.0 | 100.5 | 257 | 1.25× | 2.62 |
| 32 | lookahead 5 | 67.3 | 64.9 | 45.2 | 99.7 | 257 | 1.26× | 2.71 |
Performance Across Different Batch Sizes
Aurora provides consistent speedups across all batch sizes for MiniMax M2.1, demonstrating the effectiveness of speculative decoding across diverse deployment scenarios:
Batch Size 1 (Best Case): Up to 1.58× speedup with lookahead 3 configuration. The recommended lookahead 4 achieves 1.57× speedup with 2.62 average accept length. At low batch sizes, the cost of draft generation and verification is well amortized by reduced target model forward passes, providing the largest gains for latency-critical scenarios.
Batch Size 8 (Strong): 1.36× speedup with lookahead 4 configuration (2.62 average accept length). Speculative decoding continues to provide substantial throughput improvements for moderate batching scenarios.
Batch Size 16 (Moderate): 1.29× speedup with lookahead 4 configuration (2.62 average accept length). Benefits remain significant as the verification overhead is effectively managed.
Batch Size 32 (Consistent): 1.25-1.26× speedup with lookahead 4-5 configurations. MiniMax M2.1 maintains positive speedups even at large batch sizes, where verification overhead often erodes speculative-decoding gains, demonstrating robust performance across the batching spectrum.
Metrics Explained:
- TPS: Tokens per second (throughput)
- Acc Len: Average accept length (number of draft tokens accepted per verification step)
- Speedup: Relative to the no-speculation baseline
- P05/P95: 5th and 95th percentile throughput values
- Count: Number of evaluation samples
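The table statistics can be reproduced from per-request TPS samples along these lines (variable names are illustrative; this is a sketch, not the actual evaluation code, and the percentile method is a simple nearest-rank choice):

```python
import statistics

def summarize(tps_samples):
    """Mean/P50/P05/P95 over per-request tokens-per-second samples."""
    s = sorted(tps_samples)
    n = len(s)

    def pct(p):
        # Nearest-rank percentile over the sorted samples.
        return s[min(n - 1, round(p / 100 * (n - 1)))]

    return {
        "mean": statistics.fmean(s),
        "p50": pct(50),
        "p05": pct(5),
        "p95": pct(95),
        "count": n,
    }

def speedup(spec_mean_tps, baseline_mean_tps):
    """Speedup relative to the no-speculation baseline."""
    return spec_mean_tps / baseline_mean_tps
```

For example, the batch-size-1, lookahead-3 row follows from `speedup(213.0, 134.9)`, which rounds to 1.58×.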
Notably, this performance is achieved with a model trained from scratch - it learns entirely through Aurora's online training process over 44,000 requests, demonstrating the effectiveness of inference-time training without expensive pre-training.
Usage
This model is designed to be used as a draft model in EAGLE3 speculative decoding pipelines with MiniMax M2.1 as the target model.
Example 1: Python API (Offline Batch Inference)
import sglang as sgl


def main():
    # Sample prompts
    prompts = [
        "Explain the concept of quantum computing:",
        "Write a short story about a time traveler:",
        "Describe the process of photosynthesis:",
    ]

    # Create sampling params
    sampling_params = {"temperature": 0.7, "max_new_tokens": 256}

    # Initialize engine with speculative decoding (lookahead 4 - recommended)
    llm = sgl.Engine(
        model_path="MiniMax/M2.1",
        speculative_draft_model_path="togethercomputer/Aurora-Spec-Minimax-M2.1",
        speculative_algorithm="EAGLE",
        speculative_num_steps=4,  # Recommended: lookahead 4
        speculative_eagle_topk=1,
        speculative_num_draft_tokens=6,
        dtype="bfloat16",
        trust_remote_code=True,
    )

    # Generate with speculative decoding
    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs
    for prompt, output in zip(prompts, outputs):
        print("=" * 50)
        print(f"Prompt: {prompt}")
        print(f"Generated: {output['text']}")


# The __main__ guard is required when the engine spawns subprocesses
if __name__ == "__main__":
    main()
Example 2: Launch Server (Production Use)
Step 1: Start the SGLang server with speculative decoding
python -m sglang.launch_server \
--model-path MiniMax/M2.1 \
--speculative-draft-model-path togethercomputer/Aurora-Spec-Minimax-M2.1 \
--speculative-algorithm EAGLE \
--speculative-num-steps 4 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 6 \
--dtype bfloat16 \
--trust-remote-code \
--port 30000 \
--host 0.0.0.0
Step 2: Send requests to the server
import requests

# Server endpoint
url = "http://localhost:30000/v1/completions"

# Request payload
payload = {
    "prompt": "Explain the concept of quantum computing:",
    "max_tokens": 256,
    "temperature": 0.7,
}

# Send request
response = requests.post(url, json=payload)
result = response.json()
print(result["choices"][0]["text"])
Or using OpenAI-compatible client:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",
    api_key="EMPTY",
)

response = client.completions.create(
    model="MiniMax/M2.1",
    prompt="Explain the concept of quantum computing:",
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].text)
Local Model Paths
If you have downloaded the models locally, replace the HuggingFace model paths with local paths:
python -m sglang.launch_server \
--model-path /path/to/MiniMax-M2.1 \
--speculative-draft-model-path /path/to/Aurora-Spec-Minimax-M2.1 \
--speculative-algorithm EAGLE \
--speculative-num-steps 4 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 6 \
--dtype bfloat16 \
--trust-remote-code \
--port 30000
Limitations
- Optimized specifically for the MiniMax M2.1 target model; best performance is achieved with that pairing
- Performance may vary with other target models
- Requires an inference framework with EAGLE3 speculative-decoding support (e.g., SGLang)
Citation
If you use this model, please cite:
@article{aurora2026,
  title={When RL Meets Adaptive Speculative Training: A Unified Training-Serving System},
  author={Wang, Junxiong and Bie, Fengxiang and Li, Jisen and Zhou, Zhongzhu and Shao, Zelei and Wang, Yubo and Liu, Yinghui and Wu, Qingyang and May, Avner and Yanamandra, Sri and Zhang, Yineng and Zhang, Ce and Dao, Tri and Liang, Percy and Athiwaratkun, Ben and Song, Shuaiwen Leon and Xu, Chenfeng and Wu, Xiaoxia},
  journal={arXiv preprint arXiv:2602.06932},
  year={2026},
  url={https://arxiv.org/abs/2602.06932}
}
Acknowledgments
- Target Model: MiniMax M2.1
- Training Framework: Aurora - Inference-Time Training System
- Hardware: NVIDIA H200 GPU
License
Apache 2.0