---
datasets:
- d3LLM/trajectory_data_llada_32
pipeline_tag: text-generation
tags:
- diffusion
- text-generation
- fast-inference
- d3llm
license: apache-2.0
library_name: transformers
base_model: GSAI-ML/LLaDA-8B-Instruct
---
# d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation 🚀
This repository contains **d3LLM-LLaDA**, an ultra-fast diffusion language model presented in the paper [d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation](https://huggingface.co/papers/2601.07568).
- 📄 **Paper:** [arXiv:2601.07568](https://huggingface.co/papers/2601.07568)
- 💻 **Code:** [GitHub - hao-ai-lab/d3LLM](https://github.com/hao-ai-lab/d3LLM)
- 🌐 **Blog:** [Ultra-Fast Diffusion LLMs](https://hao-ai-lab.github.io/blogs/text-diffusion/)
- 🕹️ **Demo:** [d3LLM Demo](https://d3llm-team.github.io/)
## Model Description
**d3LLM-LLaDA** is an ultra-fast diffusion language model that strikes a balance between accuracy and parallelism. It uses pseudo-trajectory distillation to teach the model which tokens can be decoded confidently at early steps, and employs an entropy-based multi-block decoding mechanism with KV-cache refresh during inference.
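The core idea of entropy-based decoding can be illustrated with a minimal sketch (this is a conceptual illustration, not the repository's implementation; the function name and threshold are ours): positions whose predictive entropy is low enough are committed in parallel, while the rest stay masked for later denoising steps.

```python
import torch

def select_confident_tokens(logits: torch.Tensor, threshold: float = 0.5):
    """Illustrative entropy-based selection: commit positions whose
    predictive entropy (in nats) falls below `threshold`, leaving the
    remaining positions masked for later denoising steps."""
    probs = torch.softmax(logits, dim=-1)                 # (seq_len, vocab)
    entropy = -(probs * torch.log(probs + 1e-9)).sum(-1)  # (seq_len,)
    commit = entropy < threshold                          # decoded this step
    tokens = probs.argmax(-1)                             # greedy pick per position
    return tokens, commit
```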
## Key Features
- 🚀 **High throughput:** 5.0× faster than autoregressive models (Qwen-2.5-7B-it) on an H100 GPU and 3.5× faster on an A100 GPU.
- 📊 **High AUP:** Achieves high Accuracy Under Parallelism (AUP) scores across benchmarks.
- 🔧 **Task optimization:** Specifically optimized for coding and math reasoning tasks.
## Installation
To use this model, clone the official repository and install the required dependencies:
```bash
# Clone the repository
git clone https://github.com/hao-ai-lab/d3LLM.git
cd d3LLM
# Install dependencies
pip install -r requirements.txt
```
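## Usage
After installation, the checkpoint can be loaded through 🤗 Transformers. Below is a minimal loading sketch, assuming the model follows the LLaDA-style remote-code interface (`trust_remote_code=True`); the repo id is illustrative, so substitute the actual Hub name of this checkpoint. The entropy-based multi-block decoding loop is implemented in the d3LLM repository, so see the GitHub README for the exact generation entry point.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative repo id; confirm the exact checkpoint name on the Hub.
model_id = "d3LLM/d3LLM-LLaDA"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda").eval()

# Prepare a prompt; the diffusion decoding loop itself is provided by
# the d3LLM repository, not by a standard model.generate() call.
inputs = tokenizer(
    "Write a Python function that reverses a string.",
    return_tensors="pt",
).to("cuda")
```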
## Citation
If you find d3LLM useful for your research, please cite the following work:
```bibtex
@article{arxiv'26:d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {arXiv preprint arXiv:2601.07568},
  year    = {2026}
}
```