LK-Speculators Collection
High-performance speculative decoding draft models trained with LK losses, a family of novel training objectives that directly optimize acceptance rate.
This is an EAGLE-3 draft model for gpt-oss-120b, trained from scratch with LK losses: training objectives that directly target acceptance rate rather than relying on KL divergence as a proxy.
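The LK losses themselves are defined in the paper cited at the end of this card and are not reproduced here. As a rough illustration of the underlying idea: under standard speculative sampling a drafted token x ~ q is accepted with probability min(1, p(x)/q(x)), so the expected per-token acceptance rate is Σ_x min(p(x), q(x)), and a draft model can be trained to maximize that quantity directly instead of minimizing KL(p ‖ q). A minimal PyTorch sketch of the two kinds of objective (illustrative only; the function names and exact loss form are assumptions, not the paper's LK losses):

```python
import torch
import torch.nn.functional as F

def kl_proxy_loss(draft_logits: torch.Tensor, target_logits: torch.Tensor) -> torch.Tensor:
    """Conventional distillation proxy: KL(target distribution || draft distribution)."""
    log_q = F.log_softmax(draft_logits, dim=-1)
    p = F.softmax(target_logits, dim=-1)
    # F.kl_div expects log-probabilities for the input and probabilities for the target.
    return F.kl_div(log_q, p, reduction="batchmean")

def expected_acceptance(draft_logits: torch.Tensor, target_logits: torch.Tensor) -> torch.Tensor:
    """Expected acceptance rate of standard speculative sampling: sum_x min(p(x), q(x))."""
    p = F.softmax(target_logits, dim=-1)
    q = F.softmax(draft_logits, dim=-1)
    return torch.minimum(p, q).sum(dim=-1).mean()

def acceptance_loss(draft_logits: torch.Tensor, target_logits: torch.Tensor) -> torch.Tensor:
    """Train the draft model by maximizing expected acceptance (minimizing its negation)."""
    return -expected_acceptance(draft_logits, target_logits)
```

Maximizing Σ_x min(p(x), q(x)) is equivalent to minimizing the total variation distance between the two distributions; see the citation below for the actual LK objectives.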
Average acceptance length (τ) measured across MT-bench, HumanEval, and GSM8K with K = 7:
| Configuration | Temperature = 0 | Temperature = 1 |
|---|---|---|
| EAGLE-3 + KL | 2.76 | 2.46 |
| EAGLE-3 + LK (ours) | 2.81 | 2.65 |
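Acceptance length translates into wall-clock speedup only after accounting for the draft model's own cost. A back-of-the-envelope estimate (the 5% relative draft cost below is an assumed value for illustration, not a measurement):

```python
def rough_speedup(tau: float, k: int, draft_cost_ratio: float = 0.05) -> float:
    # tau: average tokens produced per target forward pass (accepted drafts + 1 bonus token).
    # k: number of draft tokens proposed per step.
    # draft_cost_ratio: assumed cost of one draft pass relative to one target pass.
    time_per_step = 1.0 + k * draft_cost_ratio   # one target pass plus k draft passes
    return tau / time_per_step                   # vs. tau target passes for plain decoding

print(round(rough_speedup(2.81, k=7), 2))  # ~2.08 under these assumed costs
```

The snippet below shows how to load the draft model alongside gpt-oss-120b with vLLM.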
```python
from vllm import LLM, SamplingParams

# Load gpt-oss-120b with the EAGLE-3 draft model attached for speculative decoding.
llm = LLM(
    model="openai/gpt-oss-120b",
    speculative_config={
        "method": "eagle3",
        "model": "nebius/EAGLE3-gpt-oss-120b",
        "num_speculative_tokens": 6,
    },
)

sampling_params = SamplingParams(temperature=0.7)
outputs = llm.generate(["Explain speculative decoding in simple terms."], sampling_params)
print(outputs[0].outputs[0].text)
```
Note: The current vLLM implementation samples draft tokens greedily regardless of temperature settings, which can underestimate acceptance rates at temperature > 0. A community fix is under development (see vllm-project/vllm#20459). The acceptance metrics reported above were measured with proper rejection sampling.
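For reference, the accept/reject rule that "proper rejection sampling" refers to is the standard speculative-sampling test. A minimal sketch (not vLLM's implementation; names and inputs are illustrative):

```python
import torch

def accept_or_resample(p: torch.Tensor, q: torch.Tensor, x: int) -> int:
    """One accept/reject step of standard speculative sampling.

    p: target-model probabilities over the vocabulary (1D tensor).
    q: draft-model probabilities over the vocabulary (1D tensor).
    x: token id that was sampled from q.
    """
    # Accept the drafted token with probability min(1, p(x) / q(x)).
    if torch.rand(()) < torch.clamp(p[x] / q[x], max=1.0):
        return x
    # On rejection, resample from the normalized residual (p - q)_+ so that the
    # overall output distribution still matches the target model exactly.
    residual = torch.clamp(p - q, min=0.0)
    return int(torch.multinomial(residual / residual.sum(), 1))
```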
```bibtex
@misc{samarin2026lklosses,
  title         = {LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding},
  author        = {Alexander Samarin and Sergei Krutikov and Anton Shevtsov and Sergei Skvortsov and Filipp Fisin and Alexander Golubev},
  year          = {2026},
  eprint        = {2602.23881},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2602.23881}
}
```