## Model Description

This is an EAGLE-3 draft model for Llama-3.3-70B-Instruct, trained from scratch using LK losses — training objectives that directly target acceptance rate rather than using KL divergence as a proxy.
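
For intuition: under standard speculative sampling, a draft token x ~ q is accepted with probability min(1, p(x)/q(x)), so the expected per-position acceptance rate is sum_x min(p(x), q(x)) = 1 − TV(p, q). The sketch below shows what a loss targeting this quantity directly could look like; it is an illustration of the idea, not the paper's exact LK loss.

```python
import torch
import torch.nn.functional as F

def expected_acceptance_loss(draft_logits, target_logits):
    """Illustrative acceptance-rate loss (NOT the paper's exact LK loss).

    Under standard speculative sampling, a draft token x ~ q is accepted
    with probability min(1, p(x) / q(x)), so the expected acceptance rate
    is sum_x min(p(x), q(x)) = 1 - TV(p, q). Minimizing this loss pushes
    that expectation up directly instead of using KL divergence as a proxy.
    """
    q = F.softmax(draft_logits, dim=-1)            # draft distribution
    p = F.softmax(target_logits, dim=-1).detach()  # frozen target distribution
    acceptance = torch.minimum(p, q).sum(dim=-1)   # E[accept] per position
    return (1.0 - acceptance).mean()               # minimize rejection mass
```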

## Training Details

- **Base model:** meta-llama/Llama-3.3-70B-Instruct
- **Draft architecture:** EAGLE-3 (≈1B parameters, BF16)
- **Training data:** Infinity-Instruct-0625 with responses generated by Llama-3.3-70B-Instruct
- **Training objective:** hybrid LK loss with adaptive λ scheduling (η = 3); an illustrative sketch follows this list
- **Training:** 10 epochs from random initialization
- **Draft length:** K = 6 speculative tokens
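
The card does not spell out the hybrid objective or the adaptive λ schedule, so the sketch below is only a guess at its general shape: a convex combination of an acceptance-rate term and a standard KL term, with λ ramped over training. Both the polynomial ramp and the reading of η as its exponent are assumptions; see the paper for the actual definitions.

```python
import torch
import torch.nn.functional as F

def hybrid_lk_loss(draft_logits, target_logits, step, total_steps, eta=3.0):
    """Hypothetical hybrid objective: lam * L_acc + (1 - lam) * L_KL.

    The polynomial lambda ramp (exponent eta) and the use of eta = 3 here
    are assumptions for illustration only; the actual adaptive schedule is
    defined in the LK-losses paper.
    """
    p = F.softmax(target_logits, dim=-1).detach()        # frozen target dist
    log_q = F.log_softmax(draft_logits, dim=-1)
    q = log_q.exp()
    l_acc = (1.0 - torch.minimum(p, q).sum(-1)).mean()   # acceptance term
    l_kl = F.kl_div(log_q, p, reduction="batchmean")     # KL(p || q) proxy
    lam = min(1.0, step / total_steps) ** eta            # assumed ramp 0 -> 1
    return lam * l_acc + (1.0 - lam) * l_kl
```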

## Performance

Average acceptance length (τ) measured across MT-bench, HumanEval, and GSM8K with K = 7:

| Configuration | Temperature = 0 | Temperature = 1 |
|---|---|---|
| EAGLE-3 + KL | 4.78 | 4.50 |
| EAGLE-3 + LK (ours) | 4.81 | 4.66 |
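
Acceptance length τ is the average number of tokens committed per draft-and-verify cycle, so higher τ means fewer target-model forward passes per generated token. A minimal way to compute it from logged per-cycle accept counts (assuming the convention that the verifier's one guaranteed token per cycle is included):

```python
def mean_acceptance_length(accepted_per_cycle):
    """tau: average tokens committed per draft-verify cycle.

    Each cycle commits the accepted draft prefix plus one token from the
    verifier (a corrected sample or the bonus token), hence the +1.
    Conventions vary on counting that extra token; this sketch includes it.
    """
    return sum(n + 1 for n in accepted_per_cycle) / len(accepted_per_cycle)
```

For example, `mean_acceptance_length([4, 3, 5])` returns `5.0`.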

## Comparison with Public Checkpoints

| Model | MT-bench (τ) | HumanEval (τ) | GSM8K (τ) |
|---|---|---|---|
| yuhuili/EAGLE3-LLaMA3.1-Instruct-8B | 2.77 | 3.49 | 3.34 |
| RedHatAI/Llama-3.3-70B-Instruct-speculator.eagle3 | 2.88 | 3.62 | 3.29 |
| Ours | 3.89 | 5.08 | 5.01 |

*Measured at temperature = 1 with K = 7.*

## Usage with vLLM

```python
from vllm import LLM, SamplingParams

# Load the target model with this repo as the EAGLE-3 draft model.
# A 70B target typically needs multiple GPUs (e.g. tensor_parallel_size=4),
# depending on your hardware.
llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",
    speculative_config={
        "method": "eagle3",
        "model": "nebius/EAGLE3-Llama-3.3-70B-Instruct",
        "num_speculative_tokens": 6,
    },
)

sampling_params = SamplingParams(temperature=0.7)
outputs = llm.generate(["Explain speculative decoding in simple terms."], sampling_params)
print(outputs[0].outputs[0].text)
```
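
The `num_speculative_tokens` value of 6 matches the draft length the model was trained with (K = 6); the acceptance numbers above were measured at K = 7, so you may want to tune this value for your own latency and throughput targets.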

Note: The current vLLM implementation samples draft tokens greedily regardless of temperature settings, which can underestimate acceptance rates at temperature > 0. A community fix is under development (see vllm-project/vllm#20459). The acceptance metrics reported above were measured with proper rejection sampling.
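
For reference, "proper rejection sampling" here means the standard speculative-sampling verification rule: accept draft token x with probability min(1, p(x)/q(x)); on rejection, resample from the residual distribution max(0, p − q), renormalized. A minimal illustration (not vLLM's actual implementation):

```python
import torch

def verify_token(token, p, q, generator=None):
    """Accept or reject one draft token under standard speculative sampling.

    `p` and `q` are the target and draft probability vectors for this
    position. Illustrative sketch only; not vLLM's implementation.
    """
    ratio = p[token] / q[token]
    if torch.rand((), generator=generator) < ratio:
        return token, True                      # accepted
    residual = torch.clamp(p - q, min=0.0)      # resample from max(0, p - q)
    residual = residual / residual.sum()
    return torch.multinomial(residual, 1).item(), False
```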

## License

CC BY 4.0

This model was trained using outputs from meta-llama/Llama-3.3-70B-Instruct. Use of this model is additionally subject to the Llama 3.3 Community License Agreement.

Llama 3.3 is licensed under the Llama 3.3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.

## Citation

```bibtex
@misc{samarin2026lklosses,
  title     = {LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding},
  author    = {Alexander Samarin and Sergei Krutikov and Anton Shevtsov and Sergei Skvortsov and Filipp Fisin and Alexander Golubev},
  year      = {2026},
  eprint    = {2602.23881},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url       = {https://arxiv.org/abs/2602.23881}
}
```