---
datasets:
- d3LLM/trajectory_data_dream_32
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
base_model: Dream-org/Dream-v0-Instruct-7B
tags:
- diffusion
- text-generation
- fast-inference
- d3llm
---
# d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation
This repository contains the **d3LLM-Dream** model, an ultra-fast diffusion language model introduced in the paper [d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation](https://huggingface.co/papers/2601.07568).
- **Paper**: [arXiv:2601.07568](https://huggingface.co/papers/2601.07568)
- **Code repo**: [https://github.com/hao-ai-lab/d3LLM](https://github.com/hao-ai-lab/d3LLM)
- **Blog**: [https://hao-ai-lab.github.io/blogs/text-diffusion/](https://hao-ai-lab.github.io/blogs/text-diffusion/)
- **Demo**: [https://d3llm-team.github.io/](https://d3llm-team.github.io/)
## Model Description
**d3LLM-Dream** is an ultra-fast diffusion language model that achieves high generation speed while maintaining competitive performance. It strikes a balance between accuracy and parallelism by using **pseudo-trajectory distillation** during training and **entropy-based multi-block decoding** during inference.
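To give intuition for the decoding rule, here is a minimal, self-contained sketch of entropy-based parallel unmasking. This is not the model's actual decoder; the function name, threshold, and fallback rule are illustrative assumptions:

```python
import torch

def entropy_parallel_step(logits: torch.Tensor, masked: torch.Tensor, threshold: float = 0.5):
    """Illustrative single decoding step: commit, in parallel, every still-masked
    position whose predictive entropy falls below `threshold`.

    logits: (seq_len, vocab_size) model outputs for the current block
    masked: (seq_len,) bool, True where a token has not been committed yet
    """
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    commit = masked & (entropy < threshold)
    if not commit.any():
        # Always make progress: commit the single most confident masked position.
        idx = entropy.masked_fill(~masked, float("inf")).argmin()
        commit = torch.zeros_like(masked)
        commit[idx] = True
    return probs.argmax(dim=-1), commit  # greedy tokens and positions to commit
```

A lower threshold commits fewer tokens per step (more model calls, higher accuracy), while a higher one commits more tokens in parallel; that trade-off is what the AUP metric below summarizes.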
## Key Features
- **High throughput**: 4.5× faster than autoregressive models (Qwen-2.5-7B) on an H100 GPU and 2.5× faster on an A100 GPU; reaches **235.34 tokens/s** on GSM8K-CoT on an H100.
- **High AUP**: optimized for Accuracy Under Parallelism (AUP) across benchmarks.
- **Specialized**: tuned for coding and math reasoning tasks.
## Usage
You can load and use the model with the 🤗 Transformers library. Note that `trust_remote_code=True` is required because the model uses a custom architecture.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "d3LLM/d3LLM_Dream"

# Load the tokenizer and model; trust_remote_code=True is required
# because the model ships a custom diffusion architecture.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.bfloat16, trust_remote_code=True)
model = model.to("cuda").eval()

# For detailed inference scripts (entropy-based multi-block decoding),
# please refer to the official GitHub repository.
```
For more comprehensive examples and evaluation scripts, visit the [official repository](https://github.com/hao-ai-lab/d3LLM).
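As a hedged end-to-end sketch, generation might look like the snippet below. It assumes d3LLM-Dream inherits the base Dream model's `diffusion_generate` entry point; the argument names and values are illustrative, so check the repository's inference scripts for the exact API:

```python
# Hypothetical generation call, mirroring the base Dream model's API.
# (Continues from the loading snippet above.)
messages = [{"role": "user", "content": "What is 12 * 17? Think step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.diffusion_generate(
    input_ids,
    max_new_tokens=256,
    steps=256,              # diffusion refinement steps
    temperature=0.2,
    top_p=0.95,
    alg="entropy",          # entropy-based token selection
    return_dict_in_generate=True,
)
print(tokenizer.decode(output.sequences[0, input_ids.shape[1]:], skip_special_tokens=True))
```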
## Citation
```bibtex
@article{qian2026d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {arXiv preprint arXiv:2601.07568},
  year    = {2026}
}
``` |