---
license: mit
---
# Introduction to TraDo
[Paper](https://arxiv.org/abs/2509.06949) | [Code](https://github.com/Gen-Verse/dLLM-RL) | [Blog](https://yinjjiew.github.io/projects/dllmrl/)
We introduce **TraDo**, a state-of-the-art diffusion language model trained with **TraceRL**.
* **TraDo-4B-Instruct** and **TraDo-8B-Instruct** outperform strong autoregressive (AR) models of similar size on math reasoning tasks.
* **TraDo-8B-Thinking** is the first Long-CoT diffusion language model.
<p align="center">
<img src="https://github.com/yinjjiew/Data/raw/main/dllm-rl/figure1.png" width="100%"/>
</p>
<p align="center">
<img src="https://github.com/yinjjiew/Data/raw/main/dllm-rl/maintable.png" width="100%"/>
</p>
# Citation
```bibtex
@article{wang2025trado,
title={Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models},
author={Wang, Yinjie and Yang, Ling and Li, Bowen and Tian, Ye and Shen, Ke and Wang, Mengdi},
journal={arXiv preprint arXiv:2509.06949},
year={2025}
}
```