Improve model card: add paper link, citation, license, and library_name
#1
by nielsr (HF Staff) - opened

README.md CHANGED

---
datasets:
- d3LLM/trajectory_data_llada_32
pipeline_tag: text-generation
tags:
- diffusion
- text-generation
- fast-inference
- d3llm
license: apache-2.0
library_name: transformers
base_model: GSAI-ML/LLaDA-8B-Instruct
---

# d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation 🚀

This repository contains **d3LLM-LLaDA**, an ultra-fast diffusion language model presented in the paper [d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation](https://huggingface.co/papers/2601.07568).

- 📄 **Paper:** [arXiv:2601.07568](https://huggingface.co/papers/2601.07568)
- 💻 **Code:** [GitHub - hao-ai-lab/d3LLM](https://github.com/hao-ai-lab/d3LLM)
- 📝 **Blog:** [Ultra-Fast Diffusion LLMs](https://hao-ai-lab.github.io/blogs/text-diffusion/)
- 🕹️ **Demo:** [d3LLM Demo](https://d3llm-team.github.io/)

## Model Description

**d3LLM-LLaDA** is an ultra-fast diffusion language model that strikes a balance between accuracy and parallelism. It uses pseudo-trajectory distillation to teach the model which tokens can be decoded confidently at early steps, and employs an entropy-based multi-block decoding mechanism with KV-cache refresh during inference.
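
To make the mechanism concrete, below is a minimal, self-contained sketch of the entropy-based unmasking idea. It is an illustrative toy, not code from the d3LLM repository: the function name `entropy_unmask_step`, the 0.4 threshold, the greedy argmax commit, and the one-token fallback are all assumptions, and the real decoder additionally manages multiple blocks and KV-cache refresh.

```python
import torch

def entropy_unmask_step(logits, is_masked, entropy_threshold=0.4):
    """One parallel decoding step over a block of masked positions.

    logits:     (seq_len, vocab_size) scores from the diffusion LM.
    is_masked:  (seq_len,) bool, True where the position is undecoded.
    Returns the positions to decode this step and the greedy tokens.
    """
    probs = torch.softmax(logits, dim=-1)
    # Predictive entropy per position; low entropy = confident prediction.
    entropy = -(probs * probs.clamp_min(1e-10).log()).sum(dim=-1)

    # Commit every still-masked position the model is already sure about.
    decode_now = is_masked & (entropy < entropy_threshold)
    if is_masked.any() and not decode_now.any():
        # Fallback so each step makes progress: decode the single most
        # confident remaining position.
        masked_entropy = entropy.masked_fill(~is_masked, float("inf"))
        decode_now[masked_entropy.argmin()] = True
    return decode_now, probs.argmax(dim=-1)

# Toy driver: random logits stand in for the model's forward pass.
torch.manual_seed(0)
logits = 5.0 * torch.randn(8, 100)
is_masked = torch.ones(8, dtype=torch.bool)
steps = 0
while is_masked.any():
    decode_now, _tokens = entropy_unmask_step(logits, is_masked)
    is_masked &= ~decode_now
    steps += 1
print(f"decoded 8 positions in {steps} parallel step(s)")
```

The intuition is the one the distillation objective targets: tokens whose predictive distribution is already low-entropy at an early step can be committed in parallel, so the number of sequential model calls shrinks well below the sequence length.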

## Key Features

- 🚀 **High throughput:** 5.0× faster than autoregressive models (Qwen-2.5-7B-it) on an H100 GPU and 3.5× faster on an A100 GPU.
- 📈 **High AUP:** Achieves high Accuracy Under Parallelism (AUP) scores across benchmarks.
- 🧠 **Task Optimization:** Specifically optimized for coding and math reasoning tasks.

## Installation

To use this model, clone the official repository and install the required dependencies:

```bash
# Clone the repository
git clone https://github.com/hao-ai-lab/d3LLM.git
cd d3LLM

# Install dependencies
pip install -r requirements.txt
```
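
After installation, the checkpoint is expected to load through the `transformers` library (the card sets `library_name: transformers`, and the base model `GSAI-ML/LLaDA-8B-Instruct` ships custom modeling code, hence `trust_remote_code=True`). The snippet below is a hedged loading sketch: `d3LLM/d3LLM-LLaDA` is a placeholder model id for this repository, and the fast entropy-based sampler itself lives in the d3LLM repository's scripts rather than behind a plain `model.generate` call.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "d3LLM/d3LLM-LLaDA"  # placeholder; substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,     # LLaDA-style checkpoints bundle custom modeling code
    torch_dtype=torch.bfloat16,
).eval()

# Build an instruct-style prompt; actual generation is then driven by the
# entropy-based multi-block sampler from the d3LLM repository.
messages = [{"role": "user", "content": "Explain diffusion language models in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt")
```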

## Citation

If you find d3LLM useful for your research, please cite the following work:

```bibtex
@article{arxiv'26:d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {ArXiv preprint},
  volume  = {arXiv:2601.07568},
  year    = {2026}
}
```