LLaDA-Instruct-JustGRPO

This model is LLaDA-8B-Instruct fine-tuned with JustGRPO on GSM8K.

It was introduced in the paper "The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models".

Method

JustGRPO is a minimalist RL approach for diffusion language models. Instead of complex diffusion-specific RL adaptations, we simply treat dLLMs as autoregressive models during training and apply standard GRPO. See our paper for details.
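Below is a minimal sketch of a GRPO-style objective under this recipe. It is illustrative only, not the authors' training code: `grpo_loss` and its argument names are hypothetical, and the per-token log-probabilities are assumed to come from scoring sampled completions left-to-right with teacher forcing, exactly as one would for an autoregressive model.

```python
# Illustrative GRPO sketch (assumptions: names and shapes are hypothetical;
# log-probs come from autoregressive-style scoring of the dLLM's samples).
import torch

def grpo_loss(logp_new, logp_old, rewards, mask, clip_eps=0.2):
    """
    logp_new, logp_old: (G, T) per-token log-probs of G sampled completions
                        under the current and behavior policies.
    rewards:            (G,) scalar reward per completion (e.g., GSM8K
                        answer correctness).
    mask:               (G, T) 1 for completion tokens, 0 for padding.
    """
    # Group-relative advantage: normalize rewards within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # (G,)
    adv = adv[:, None]                                         # broadcast over tokens

    # PPO-style clipped surrogate on the per-token importance ratio.
    ratio = torch.exp(logp_new - logp_old)                     # (G, T)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    per_token = torch.minimum(unclipped, clipped)

    # Average over valid completion tokens, maximize the surrogate.
    return -(per_token * mask).sum() / mask.sum()
```

The point of the sketch is that nothing in it is diffusion-specific: the same objective applies verbatim to any model that can assign per-token log-probabilities to its own samples.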

Performance on GSM8K

Sequence Length | 128  | 256  | 512
Accuracy (%)    | 83.8 | 89.1 | 89.8

Usage

For generation and evaluation, please refer to our GitHub repository.
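As a starting point, here is a hedged loading sketch. It assumes the checkpoint follows the base LLaDA-8B-Instruct convention of loading through `AutoModel` with `trust_remote_code=True`; verify against the repository, which contains the actual generation and evaluation entry points.

```python
# Illustrative only: loading convention assumed from the base
# LLaDA-8B-Instruct model card, not confirmed for this checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "nzl-thu/LLaDA-Instruct-JustGRPO"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # or torch.float32 to match the stored weights
).eval()
```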

Citation

```bibtex
@article{ni2026flexibility,
  title={The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models},
  author={Ni, Zanlin and Wang, Shenzhi and Yue, Yang and Yu, Tianyu and Zhao, Weilin and Hua, Yeguo and Chen, Tianyi and Song, Jun and Yu, Cheng and Zheng, Bo and Huang, Gao},
  journal={arXiv preprint arXiv:2601.15165},
  year={2026}
}
```