## Model Description
This model is a fine-tuned version of DeepSeek-V3's native MTP (multi-token prediction) module, optimized for speculative decoding with LK losses: training objectives that directly target the acceptance rate rather than using KL divergence as a proxy.

The original DeepSeek-V3 MTP module was trained primarily for first-token prediction and is reused autoregressively for later positions, which degrades acceptance rates at later draft positions. Our fine-tuning addresses this mismatch and substantially improves multi-token speculation performance.
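For background, under standard speculative sampling a draft token x drawn from the draft distribution q is accepted with probability min(1, p(x)/q(x)), where p is the target distribution; the expected acceptance rate at one position equals the sum over x of min(p(x), q(x)). The sketch below only illustrates that standard acceptance rule; the distributions are invented for illustration and the code is not part of this repository.

```python
import numpy as np

def accept_draft_token(p, q, token, rng):
    """Standard speculative-sampling test: accept a token drawn from the
    draft distribution q with probability min(1, p[token] / q[token])."""
    return rng.random() < min(1.0, p[token] / q[token])

rng = np.random.default_rng(0)
p = np.array([0.7, 0.2, 0.1])    # target-model distribution (illustrative)
q = np.array([0.5, 0.4, 0.1])    # draft-model distribution (illustrative)
token = rng.choice(len(q), p=q)  # drafter proposes a token by sampling from q
print(accept_draft_token(p, q, token, rng))
print(np.minimum(p, q).sum())    # expected acceptance rate at this position
```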
## Training Details
- Base model: deepseek-ai/DeepSeek-V3-0324
- Draft architecture: MTP
- Training data: Infinity-Instruct-0625 with DeepSeek-V3 generated responses
- Training objective: Hybrid LK loss with adaptive λ scheduling (η=3); see the illustrative sketch after this list
- Training: 1 epoch from pretrained MTP weights
- Draft length: K = 6 speculative tokens
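The exact hybrid LK objective and its λ schedule are defined in the paper cited below. Purely as a hypothetical illustration of the general shape of such an objective, the sketch assumes a convex combination of a KL proxy term and an acceptance term (using the identity that the per-position acceptance rate equals the sum of min(p(x), q(x))) with a polynomial λ schedule; the schedule form, the way the terms are combined, and the role of η here are all assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def hybrid_lk_loss(draft_logits, target_logits, step, total_steps, eta=3.0):
    """Hypothetical sketch of a hybrid objective: anneal from a KL proxy
    toward a term rewarding expected acceptance. Not the paper's formula."""
    p = F.softmax(target_logits, dim=-1)            # target distribution
    log_q = F.log_softmax(draft_logits, dim=-1)     # draft log-probabilities
    kl = F.kl_div(log_q, p, reduction="batchmean")  # KL(p || q) proxy term
    # Expected acceptance at one position is sum_x min(p(x), q(x)).
    acceptance = torch.minimum(p, log_q.exp()).sum(dim=-1).mean()
    lam = (step / total_steps) ** eta               # assumed polynomial schedule
    return (1.0 - lam) * kl - lam * acceptance      # maximize the acceptance term
```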
## Performance

Average acceptance length (τ), the mean number of tokens committed per target-model forward pass, measured across MT-Bench, HumanEval, and GSM8K with K = 7 draft tokens:
| Configuration | Temperature = 0 | Temperature = 1 |
|---|---|---|
| Original MTP | 3.20 | 3.09 |
| MTP + KL fine-tuning | 4.79 | 4.43 |
| MTP + LK fine-tuning (ours) | 4.83 | 4.68 |
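Acceptance length translates roughly into wall-clock speedup: each verification step commits about τ tokens but also pays for the draft passes. As a back-of-the-envelope estimate only, assuming each draft step costs some fraction of a target forward pass (the `draft_cost` value below is invented for illustration):

```python
def estimated_speedup(tau, k, draft_cost):
    """Rough speculative-decoding speedup: tau tokens per verification step,
    each step costing one target pass plus k draft passes of relative cost
    `draft_cost` (an assumed, illustrative value)."""
    return tau / (1.0 + k * draft_cost)

print(estimated_speedup(tau=4.83, k=7, draft_cost=0.02))  # ~4.2x, illustrative
```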
## Usage with vLLM
```python
from vllm import LLM, SamplingParams

# Load the target model with the fine-tuned MTP module as the drafter.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3-0324",
    speculative_config={
        "method": "deepseek_mtp",
        "model": "nebius/MTP-DeepSeek-V3-0324",
        "num_speculative_tokens": 6,
    },
    tensor_parallel_size=8,
    max_num_seqs=1,
)

sampling_params = SamplingParams(temperature=0.7)
outputs = llm.generate(["Explain speculative decoding in simple terms."], sampling_params)
print(outputs[0].outputs[0].text)
```
Note: The current vLLM implementation samples draft tokens greedily regardless of temperature settings, which can underestimate acceptance rates at temperature > 0. A community fix is under development (see vllm-project/vllm#20459). The acceptance metrics reported above were measured with proper rejection sampling.
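To see why greedy drafting can depress measured acceptance at temperature > 0: a greedy drafter proposes a point mass on its argmax, so the per-position acceptance probability collapses to the target's probability of that single token rather than the sum of min(p(x), q(x)). A toy case (distributions invented for illustration) where this lowers the rate:

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])  # target distribution (illustrative)
q = np.array([0.5, 0.4, 0.1])  # draft distribution (illustrative)

print(np.minimum(p, q).sum())  # 0.8: acceptance rate with proper draft sampling
print(p[np.argmax(q)])         # 0.7: acceptance rate with greedy drafting
```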
## License

## Citation
```bibtex
@misc{samarin2026lklosses,
  title         = {LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding},
  author        = {Alexander Samarin and Sergei Krutikov and Anton Shevtsov and Sergei Skvortsov and Filipp Fisin and Alexander Golubev},
  year          = {2026},
  eprint        = {2602.23881},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2602.23881}
}
```
## Evaluation results

Self-reported acceptance lengths per benchmark:

- MT-Bench: 3.88
- GSM8K: 5.51
- HumanEval: 4.64