---
pipeline_tag: text-generation
library_name: transformers
tags:
  - speculative-decoding
  - draft-model
  - mtp
  - deepseek
  - inference-acceleration
base_model: deepseek-ai/DeepSeek-V3-0324
license: cc-by-4.0
model-index:
  - name: nebius/MTP-DeepSeek-V3-0324
    results:
      - task:
          type: text-generation
        dataset:
          name: MT-Bench
          type: MT-Bench
        metrics:
          - name: Acceptance Length
            type: Acceptance Length
            value: 3.88
      - task:
          type: text-generation
        dataset:
          name: GSM8K
          type: GSM8K
        metrics:
          - name: Acceptance Length
            type: Acceptance Length
            value: 5.51
      - task:
          type: text-generation
        dataset:
          name: HumanEval
          type: HumanEval
        metrics:
          - name: Acceptance Length
            type: Acceptance Length
            value: 4.64
datasets:
  - nebius/DeepSeek-V3-Infinity-Instruct-0625
---

## Model Description

This model is a fine-tuned version of DeepSeek-V3's native MTP (multi-token prediction) module, optimized for speculative decoding with LK losses: training objectives that directly target the acceptance rate rather than using KL divergence as a proxy.

The original DeepSeek-V3 MTP module was trained primarily for first-token prediction, yet it is reused autoregressively for later draft positions, which degrades acceptance rates at deeper positions. Our fine-tuning addresses this mismatch and substantially improves multi-token speculation performance.
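
To make the mismatch concrete, here is a minimal toy sketch of how a single MTP head gets reused autoregressively at draft time. All names and the wiring (`ToyMTPHead`, `draft_tokens`, the fuse/block structure) are illustrative stand-ins, not the actual DeepSeek-V3 implementation: the point is that the head consumes its own depth-d hidden state to produce depth d+1, a regime it was barely trained on.

```python
import torch
import torch.nn as nn

class ToyMTPHead(nn.Module):
    """Illustrative stand-in for an MTP module (not DeepSeek-V3's real code)."""
    def __init__(self, d_model: int = 64, vocab_size: int = 1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, hidden, token_ids):
        # Fuse the incoming hidden state with the last token's embedding,
        # then run one transformer block.
        fused = self.fuse(torch.cat([hidden, self.embed(token_ids)], dim=-1))
        return self.block(fused)

@torch.no_grad()
def draft_tokens(head, hidden, last_token, k=6):
    """Reuse ONE head autoregressively for k draft positions: its own output
    hidden state is fed back in at every step after the first."""
    drafts, tok = [], last_token
    for _ in range(k):
        hidden = head(hidden, tok)                 # depth d -> depth d + 1
        tok = head.lm_head(hidden).argmax(dim=-1)  # greedy draft token
        drafts.append(tok)
    return drafts

head = ToyMTPHead()
print(draft_tokens(head, torch.randn(1, 1, 64), torch.randint(0, 1000, (1, 1))))
```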

## Training Details

- Base model: deepseek-ai/DeepSeek-V3-0324
- Draft architecture: MTP (multi-token prediction)
- Training data: Infinity-Instruct-0625 with DeepSeek-V3-generated responses
- Training objective: hybrid LK loss with adaptive λ scheduling (η = 3); see the sketch after this list
- Training: 1 epoch, starting from the pretrained MTP weights
- Draft length: K = 6 speculative tokens
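
For intuition only, here is a sketch of what an acceptance-oriented objective can look like. It leans on a standard fact about speculative decoding: a draft token x ~ q is accepted with probability min(1, p(x)/q(x)), so the expected acceptance rate is Σₓ min(p(x), q(x)). The `hybrid_loss` mixing and the fixed `lam` below are hypothetical readings of "hybrid LK loss with adaptive λ scheduling"; the exact LK formulation and the η = 3 schedule are defined in the paper.

```python
import torch
import torch.nn.functional as F

def acceptance_surrogate(draft_logits, target_logits):
    """1 - E[acceptance]: rejection sampling accepts with total probability
    sum_x min(p(x), q(x)), so minimizing this pushes acceptance up.
    Generic surrogate; the paper's exact LK loss may differ."""
    p = F.softmax(target_logits, dim=-1)  # target distribution p
    q = F.softmax(draft_logits, dim=-1)   # draft distribution q
    return 1.0 - torch.minimum(p, q).sum(dim=-1).mean()

def hybrid_loss(draft_logits, target_logits, lam):
    """Hypothetical 'hybrid' objective: a lambda-weighted mix of the usual KL
    distillation term and the acceptance surrogate. The adaptive lambda
    schedule (eta = 3) is not reproduced here."""
    kl = F.kl_div(F.log_softmax(draft_logits, dim=-1),
                  F.softmax(target_logits, dim=-1),
                  reduction="batchmean")  # KL(p || q) over (batch, vocab) logits
    return lam * kl + (1.0 - lam) * acceptance_surrogate(draft_logits, target_logits)
```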

## Performance

Average acceptance length (τ) measured across MT-Bench, HumanEval, and GSM8K with K = 7:

| Configuration                | Temperature = 0 | Temperature = 1 |
|------------------------------|-----------------|-----------------|
| Original MTP                 | 3.20            | 3.09            |
| MTP + KL fine-tuning         | 4.79            | 4.43            |
| MTP + LK fine-tuning (ours)  | 4.83            | 4.68            |
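
Acceptance length translates into end-to-end speedup only after accounting for draft cost. The back-of-the-envelope model below is not from the paper: each decode cycle runs K draft steps plus one target verification pass and yields τ tokens on average, and `draft_cost` (the relative cost of one MTP step versus one target forward pass, 3% here) is an assumed figure.

```python
def estimated_speedup(tau: float, k: int, draft_cost: float) -> float:
    """Simple cost model: plain decoding spends tau target passes per tau
    tokens; speculative decoding spends k draft steps + 1 target pass."""
    return tau / (1.0 + k * draft_cost)

# tau = 4.83 (LK fine-tuning, temperature 0), K = 7, assumed 3% per-step draft cost
print(f"~{estimated_speedup(4.83, 7, 0.03):.2f}x")  # ~3.99x
```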

## Usage with vLLM

```python
from vllm import LLM, SamplingParams

# Target model plus the fine-tuned MTP module as the speculative draft.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3-0324",
    speculative_config={
        "method": "deepseek_mtp",
        "model": "nebius/MTP-DeepSeek-V3-0324",
        "num_speculative_tokens": 6,
    },
    tensor_parallel_size=8,
    max_num_seqs=1,
)

sampling_params = SamplingParams(temperature=0.7)
outputs = llm.generate(["Explain speculative decoding in simple terms."], sampling_params)
print(outputs[0].outputs[0].text)
```

Note: The current vLLM implementation samples draft tokens greedily regardless of temperature settings, which can underestimate acceptance rates at temperature > 0. A community fix is under development (see vllm-project/vllm#20459). The acceptance metrics reported above were measured with proper rejection sampling.

## License

CC BY 4.0

## Citation

```bibtex
@misc{samarin2026lklosses,
  title         = {LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding},
  author        = {Alexander Samarin and Sergei Krutikov and Anton Shevtsov and Sergei Skvortsov and Filipp Fisin and Alexander Golubev},
  year          = {2026},
  eprint        = {2602.23881},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2602.23881}
}
```