---
license: apache-2.0
library_name: transformers
---
We introduce LightningRL, a reinforcement learning post-training framework for block-wise diffusion Large Language Models (dLLMs) that breaks the accuracy–parallelism trade-off. Applied to SDAR-8B, LightningRL achieves an average TPF of 7.32 and an AUP of 497.9, improving generation quality and inference speed simultaneously.
- LightningRL-8B-32b-MATH500, LightningRL-8B-32b-GSM8K, LightningRL-8B-32b-MBPP, and LightningRL-8B-32b-HumanEval are task-specific variants fine-tuned with different reward weight configurations for targeted deployment.
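Since the card declares `library_name: transformers`, a variant can presumably be loaded through the standard `transformers` API. The sketch below is an assumption, not a confirmed recipe: the Hub repo path (shown here as the bare variant name `LightningRL-8B-32b-MATH500`), the need for `trust_remote_code` (block-wise diffusion decoding may ship custom modeling code), and the generation settings are all placeholders to adapt.

```python
# Hypothetical usage sketch for a LightningRL variant.
# The repo id, trust_remote_code flag, and generation settings below are
# assumptions; replace them with the values from the actual Hub repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LightningRL-8B-32b-MATH500"  # assumed repo id, not a confirmed Hub path

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",          # pick the checkpoint's native dtype
    device_map="auto",           # shard/place automatically on available devices
    trust_remote_code=True,      # assumption: dLLM decoding may need custom code
)

prompt = "Solve: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The task-specific variants (MATH500, GSM8K, MBPP, HumanEval) share this interface; only the repo id changes.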
## Citation
```bibtex
@misc{hu2026lightningrlbreakingaccuracyparallelismtradeoff,
      title={LightningRL: Breaking the Accuracy-Parallelism Trade-off of Block-wise dLLMs via Reinforcement Learning},
      author={Yanzhe Hu and Yijie Jin and Pengfei Liu and Kai Yu and Zhijie Deng},
      year={2026},
      eprint={2603.13319},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2603.13319},
}
```