# gpt-oss-20b-DFlash
DFlash is a novel speculative decoding method that uses a lightweight block diffusion model for drafting. It enables efficient, high-quality parallel drafting that pushes the limits of inference speed.

This model is the drafter component (0.8B parameters) and must be used together with the target model `openai/gpt-oss-20b`.
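To illustrate the general idea, here is a minimal toy sketch of the draft-and-verify loop behind speculative decoding (greedy case). The two "models" below are stand-in integer functions, not the actual DFlash drafter or target; DFlash's drafter additionally proposes its block of tokens in parallel via block diffusion.

```python
def draft_model(context, k):
    # Hypothetical drafter: proposes k tokens at once.
    # Deliberately wrong from position 3 onward, to show partial acceptance.
    return [(context[-1] + i + 1) % 100 if i < 3 else 0 for i in range(k)]

def target_model(context):
    # Hypothetical target: the single next token the big model would emit.
    return (context[-1] + 1) % 100

def speculative_step(context, k=7):
    """Accept the longest draft prefix the target agrees with, then append
    one token from the target itself, so each step makes >= 1 token of progress."""
    draft = draft_model(context, k)
    accepted = []
    for tok in draft:
        if target_model(context + accepted) == tok:
            accepted.append(tok)
        else:
            break
    accepted.append(target_model(context + accepted))
    return accepted

print(speculative_step([0], k=7))  # → [1, 2, 3, 4]
```

With 7 draft tokens per step (block size 8, as used in the evaluation below), an average of ~4-5 accepted tokens per target forward pass is what produces the ~2× speedups reported.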
## Training Data
gpt-oss-20b-DFlash is trained on 800K samples, drawn from:
For all samples, the response portion was regenerated using the target model openai/gpt-oss-20b.
## Quick Start

### SGLang
DFlash is now supported in SGLang; vLLM integration is in progress.
#### Installation

```shell
uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/16818/head#subdirectory=python"
```
#### Inference

```shell
python -m sglang.launch_server \
  --model-path openai/gpt-oss-20b \
  --speculative-algorithm DFLASH \
  --speculative-draft-model-path z-lab/gpt-oss-20b-DFlash \
  --tp-size 1 \
  --dtype bfloat16 \
  --attention-backend fa3 \
  --mem-fraction-static 0.75 \
  --trust-remote-code
```
### Transformers

#### Installation

```shell
pip install transformers==4.57.3 torch==2.9.1 accelerate
```
#### Inference

```python
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# Load the DFlash drafter (custom modeling code) and the target model.
model = AutoModel.from_pretrained(
    "z-lab/gpt-oss-20b-DFlash",
    trust_remote_code=True,
    dtype="auto",
    device_map="cuda:0",
).eval()

target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    dtype="auto",
    device_map="cuda:0",
).eval()

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

prompt = "How many positive whole-number divisors does 196 have?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    reasoning_effort="medium",
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Speculative generation: the drafter proposes blocks, the target verifies.
generate_ids = model.spec_generate(
    input_ids=model_inputs["input_ids"],
    max_new_tokens=2048,
    temperature=0.0,
    target=target,
    stop_token_ids=[tokenizer.eos_token_id],
)
print(tokenizer.decode(generate_ids[0], skip_special_tokens=False))
```
## Evaluation
We use a block size of 8 (7 draft tokens) during speculation. DFlash consistently achieves high acceptance lengths and speedups across concurrency levels. All experiments are run with SGLang on a single H200 GPU.

The reported numbers are end-to-end speedups (including prefill time). You can use a different block size at inference time by passing the `--speculative-num-draft-tokens` argument when launching the server.

Reasoning effort is set to medium for all tasks; low reasoning effort yields even higher acceptance lengths.
| | Math500 | GSM8K | HumanEval | MT-Bench |
|---|---|---|---|---|
| Accept Len | 5.1 | 4.7 | 4.3 | 4.2 |
| conc=1 | 2.2× | 2.0× | 2.0× | 1.9× |
| conc=4 | 2.1× | 2.0× | 2.1× | 2.0× |
| conc=8 | 2.2× | 2.0× | 2.2× | 2.0× |
| conc=16 | 1.9× | 1.8× | 2.1× | 1.9× |
| conc=32 | 1.8× | 1.7× | 1.9× | 1.7× |
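As an illustration of changing the block size, a relaunch with 3 draft tokens (block size 4) would combine the launch command from the Quick Start with the `--speculative-num-draft-tokens` flag; the value here is illustrative, not a tuned recommendation.

```shell
python -m sglang.launch_server \
  --model-path openai/gpt-oss-20b \
  --speculative-algorithm DFLASH \
  --speculative-draft-model-path z-lab/gpt-oss-20b-DFlash \
  --speculative-num-draft-tokens 3 \
  --trust-remote-code
```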
## Acknowledgement
We are grateful to Yotta Labs for their compute support in training this draft model.
## Citation
If you find DFlash useful for your research or applications, please cite our project.
```bibtex
@misc{chen2026dflash,
  title         = {DFlash: Block Diffusion for Flash Speculative Decoding},
  author        = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
  year          = {2026},
  eprint        = {2602.06036},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2602.06036}
}
```