gpt-oss-120b-DFlash

Paper | GitHub | Blog

DFlash is a novel speculative decoding method that uses a lightweight block diffusion model as the drafter. It enables efficient, high-quality parallel drafting that pushes the limits of inference speed.

This model serves as the drafter component and contains 0.8B parameters. It must be used in conjunction with the target model openai/gpt-oss-120b.

DFlash Architecture

πŸ“Š Training Data

gpt-oss-120b-DFlash is trained on 800K samples, drawn from:

For all samples, the response portion was regenerated using the target model openai/gpt-oss-120b.

πŸš€ Quick Start

SGLang

DFlash is now supported in SGLang; vLLM integration is currently in progress.

Installation

uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/16818/head#subdirectory=python"

Inference

python -m sglang.launch_server \
    --model-path openai/gpt-oss-120b \
    --speculative-algorithm DFLASH \
    --speculative-draft-model-path z-lab/gpt-oss-120b-DFlash \
    --tp-size 1 \
    --dtype bfloat16 \
    --attention-backend fa3 \
    --mem-fraction-static 0.8 \
    --speculative-num-draft-tokens 10 \
    --trust-remote-code
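Once the server is up, it exposes an OpenAI-compatible API (by default at `http://localhost:30000/v1`). A minimal client sketch using only the standard library; the host, port, and prompt below are illustrative assumptions:

```python
import json
import urllib.request

# Chat-completion payload for the OpenAI-compatible endpoint that
# sglang.launch_server exposes by default.
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [{"role": "user", "content": "Solve: 12 * 17 = ?"}],
    "max_tokens": 256,
}

def query(url="http://localhost:30000/v1/chat/completions"):
    """Send the request to a running server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# reply = query()  # requires a running server launched as shown above
```

Speculative decoding is transparent to the client: requests are addressed to the target model, and the drafter only accelerates generation server-side.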

Evaluation

The draft model is trained with a block size of 10. During evaluation, we use three settings:

  • Block size = 4 (3 draft tokens)
  • Block size = 6 (5 draft tokens)
  • Block size = 10 (9 draft tokens)

All experiments are conducted using SGLang on a single H200 GPU.

The reported speedups are end-to-end speedups, including prefill time. The pure decoding speedup is higher.
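The gap between end-to-end and pure decoding speedup follows an Amdahl-style argument: speculative decoding accelerates only the decode phase, so time spent in prefill dilutes the overall gain. A small sketch with hypothetical numbers (the prefill fraction below is an assumption, not a measured value):

```python
def end_to_end_speedup(decode_speedup: float, prefill_fraction: float) -> float:
    """Amdahl-style estimate: only the decode phase is accelerated.

    prefill_fraction: share of baseline wall-clock time spent in prefill.
    """
    return 1.0 / (prefill_fraction + (1.0 - prefill_fraction) / decode_speedup)

# Hypothetical example: a 2.0x pure decoding speedup with 10% of
# baseline time spent in prefill yields about 1.82x end to end.
print(round(end_to_end_speedup(2.0, 0.10), 2))  # → 1.82
```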

For all tasks, the reasoning effort is set to medium. Using low reasoning effort would further increase the acceptance length.

Acceptance Length

| Task      | Block Size = 4 | Block Size = 6 | Block Size = 10 |
|-----------|----------------|----------------|-----------------|
| GSM8K     | 3.3            | 4.3            | 5.3             |
| Math500   | 3.3            | 4.3            | 5.4             |
| HumanEval | 3.1            | 3.8            | 4.4             |
| MBPP      | 3.1            | 3.9            | 4.6             |
| MT-Bench  | 2.7            | 3.3            | 3.7             |
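Acceptance length translates into decoding throughput roughly as follows: each cycle pays for one parallel draft-block pass plus one target verification pass, and yields the accepted tokens, versus one token per target pass at baseline. The cost ratio below is a hypothetical illustration, not a measured number, and real deployments incur further overheads, so measured speedups are lower than this idealized bound:

```python
def decode_speedup(accept_len: float, draft_block_cost: float = 0.15) -> float:
    """Idealized decoding speedup from speculative decoding.

    accept_len:       average tokens accepted per verification cycle.
    draft_block_cost: cost of one (parallel) draft-block pass relative
                      to one target forward pass -- a hypothetical value.
    """
    return accept_len / (1.0 + draft_block_cost)

# GSM8K acceptance lengths from the table above:
print(round(decode_speedup(3.3), 2))  # block size 4  → 2.87
print(round(decode_speedup(5.3), 2))  # block size 10 → 4.61
```

The model makes the qualitative trend clear: longer acceptance lengths amortize the fixed per-cycle costs over more tokens.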

Speedup

GSM8K

| Concurrency | Block Size = 4 | Block Size = 10 |
|-------------|----------------|-----------------|
| 1           | 1.3×           | 1.8×            |
| 8           | 1.2×           | 1.6×            |
| 16          | 1.3×           | 1.6×            |
| 32          | 1.2×           | 1.5×            |
| 64          | 1.2×           | 1.5×            |

Math500

| Concurrency | Block Size = 4 | Block Size = 10 |
|-------------|----------------|-----------------|
| 1           | 1.5×           | 1.9×            |
| 8           | 1.4×           | 1.7×            |
| 16          | 1.5×           | 1.6×            |
| 32          | 1.4×           | 1.5×            |
| 64          | 1.4×           | 1.5×            |

HumanEval

| Concurrency | Block Size = 4 | Block Size = 10 |
|-------------|----------------|-----------------|
| 1           | 1.3×           | 1.7×            |
| 8           | 1.4×           | 1.7×            |
| 16          | 1.4×           | 1.8×            |
| 32          | 1.5×           | 1.7×            |
| 64          | 1.4×           | 1.5×            |

MBPP

| Concurrency | Block Size = 4 | Block Size = 10 |
|-------------|----------------|-----------------|
| 1           | 1.4×           | 1.8×            |
| 8           | 1.5×           | 1.7×            |
| 16          | 1.5×           | 1.8×            |
| 32          | 1.6×           | 1.8×            |
| 64          | 1.6×           | 1.6×            |

MT-Bench

| Concurrency | Block Size = 4 | Block Size = 10 |
|-------------|----------------|-----------------|
| 1           | 1.3×           | 1.3×            |
| 8           | 1.2×           | 1.3×            |
| 16          | 1.3×           | 1.3×            |
| 32          | 1.4×           | 1.3×            |
| 64          | 1.3×           | 1.2×            |

Acknowledgement

We are grateful to Yotta Labs for their compute support in training this draft model.

Citation

If you find DFlash useful for your research or applications, please cite our project.

@misc{chen2026dflash,
  title         = {DFlash: Block Diffusion for Flash Speculative Decoding},
  author        = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
  year          = {2026},
  eprint        = {2602.06036},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2602.06036}
}