d3LLM-model committed
Commit 54d58e5 · verified · 1 Parent(s): 9a1aad1

Update README.md

Files changed (1): README.md (+14 −1)
sdk: static
pinned: false
---
We introduce a novel recipe for building an ultra-fast diffusion language model named [d3LLM](https://github.com/hao-ai-lab/d3llm) (pseuDo-Distilled Diffusion LLM) 🚀, a new metric, [AUP (Accuracy Under Parallelism)](https://hao-ai-lab.github.io/blogs/text-diffusion/), that captures both accuracy and parallelism 📊, and a [Leaderboard](https://huggingface.co/spaces/d3LLM/dLLM_Leaderboard) 🏆 for various diffusion LLMs.

If you find our d3LLM or the AUP metric useful for your research, please ⭐️ star [our project](https://github.com/hao-ai-lab/d3LLM) and cite our work:
```bibtex
@article{arxiv'26:d3llm,
  title   = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
  author  = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
  journal = {ArXiv preprint},
  volume  = {arXiv:2601.07568},
  year    = {2026}
}
```