---
license: mit
library_name: transformers
pipeline_tag: text-generation
base_model: GSAI-ML/LLaDA-8B-Instruct
tags:
- code
- reasoning
- diffusion-language-model
---
# LLaDA-Instruct-JustGRPO-Code
This model is [LLaDA-8B-Instruct](https://huggingface.co/GSAI-ML/LLaDA-8B-Instruct) fine-tuned with **JustGRPO** on coding tasks.
It was introduced in the paper [The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models](https://huggingface.co/papers/2601.15165).
## Method
JustGRPO is a minimalist RL approach for diffusion language models. Instead of complex diffusion-specific RL adaptations, we simply treat dLLMs as autoregressive models during training and apply standard GRPO. See our paper for details.
- **Project Page:** [https://nzl-thu.github.io/the-flexibility-trap](https://nzl-thu.github.io/the-flexibility-trap)
- **Paper:** [arXiv:2601.15165](https://arxiv.org/abs/2601.15165)
- **Code:** [https://github.com/LeapLabTHU/JustGRPO](https://github.com/LeapLabTHU/JustGRPO)
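The core of standard GRPO that JustGRPO reuses is group-relative advantage estimation: for each prompt, a group of completions is sampled, scored, and each reward is normalized by the group's mean and standard deviation. The sketch below is a minimal illustration of that normalization step only (function name and reward values are ours, not from the paper's codebase):

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each reward against the
    mean and standard deviation of its own sampled group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical rewards for 4 completions sampled for one coding prompt,
# e.g. binary pass/fail signals from unit tests on the generated code.
rewards = [1.0, 0.0, 1.0, 0.0]
advantages = grpo_advantages(rewards)
# Completions that pass receive positive advantage, failures negative.
```

Because dLLMs are treated as autoregressive models during training, these advantages can then be plugged into the usual clipped policy-gradient objective without any diffusion-specific modification.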
## Performance
### HumanEval
| Sequence Length | 128 | 256 | 512 |
| :-------------: | :--: | :--: | :--: |
| **Pass@1 (%)** | 37.8 | 49.4 | 48.7 |
### MBPP
| Sequence Length | 128 | 256 | 512 |
| :-------------: | :--: | :--: | :--: |
| **Pass@1 (%)** | 50.6 | 52.4 | 49.0 |
## Usage
For generation and evaluation, please refer to our [GitHub repository](https://github.com/LeapLabTHU/JustGRPO).
## Citation
```bibtex
@article{ni2026flexibility,
  title={The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models},
  author={Ni, Zanlin and Wang, Shenzhi and Yue, Yang and Yu, Tianyu and Zhao, Weilin and Hua, Yeguo and Chen, Tianyi and Song, Jun and Yu, Cheng and Zheng, Bo and Huang, Gao},
  journal={arXiv preprint arXiv:2601.15165},
  year={2026}
}
```