---
pipeline_tag: text-generation
library_name: transformers
---

# dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning

dUltra is an on-policy reinforcement learning framework based on Group Relative Policy Optimization (GRPO) that learns unmasking strategies for efficient parallel decoding in masked diffusion language models (MDLMs). By jointly optimizing the base diffusion LLM and an unmasking order planner, dUltra achieves superior accuracy-efficiency trade-offs on mathematical reasoning and code generation tasks.

- **Paper:** [dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning](https://huggingface.co/papers/2512.21446)
- **GitHub Repository:** [chinsengi/dUltra-os](https://github.com/chinsengi/dUltra-os)
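
## Method at a glance

dUltra trains with a GRPO-style objective, which scores each rollout relative to the other rollouts sampled for the same prompt. As a rough illustration only (not the authors' training code; names and shapes are illustrative), the group-relative advantage can be computed like this:

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within each prompt's group of rollouts (GRPO-style).

    rewards: (num_prompts, group_size) tensor, one row of scalar rewards per prompt.
    Returns advantages of the same shape: (r - group mean) / (group std + eps).
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 rollouts each
rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0],
                        [0.5, 0.5, 1.0, 0.0]])
advantages = group_relative_advantages(rewards)
```

See the paper and the GitHub repository for the full objective, including how the unmasking order planner is optimized jointly with the base diffusion LLM.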

## Usage

The model is loaded through the `transformers` library via the custom `LLaDOUModelLM` class provided in the GitHub repository. Pass `trust_remote_code=True` so the custom model architecture can be loaded.

```python
import torch
from transformers import AutoTokenizer

# LLaDOUModelLM is defined in the GitHub repository (chinsengi/dUltra-os).
from model.llada.lladou import LLaDOUModelLM

model = LLaDOUModelLM.from_pretrained(
    "sengi/dUltra-math",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("sengi/dUltra-math")
```
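
The parallel-unmasking decoding loop itself lives in the GitHub repository; refer to its inference scripts for the exact generation call and unmasking schedule arguments. The snippet below only sketches prompt preparation with the standard `transformers` chat-template API, assuming the tokenizer ships a chat template (adjust if the repository specifies a different prompt format).

```python
# Hypothetical prompt; adapt to your task.
messages = [{"role": "user", "content": "Solve: If 3x + 5 = 20, what is x?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
# Generation: use the diffusion decoding entry point from chinsengi/dUltra-os
# (not shown here, since the class's decoding interface is not documented in this card).
```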

## Citation

```bibtex
@misc{chen2025dultraultrafastdiffusionlanguage,
      title={dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning},
      author={Shirui Chen and Jiantao Jiao and Lillian J. Ratliff and Banghua Zhu},
      year={2025},
      eprint={2512.21446},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2512.21446},
}
```