---
base_model:
- GSAI-ML/LLaDA-8B-Instruct
pipeline_tag: text-generation
---

# RemeDi: Remasking-enabled Diffusion Language Model
[![weixin](https://img.shields.io/badge/-WeChat@MAPLE实验室-000000?logo=wechat&logoColor=07C160)](https://mp.weixin.qq.com/s/UefnjlCSi6YvzVe-Xu9jjQ) [![RemeDi](https://img.shields.io/badge/Paper-RemeDi-2b9348.svg?logo=arXiv)](https://arxiv.org/abs/2509.23653)  [![Static Badge](https://img.shields.io/badge/Model(9B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20RemeDi-Instruct%20checkpoint)](https://huggingface.co/maple-research-lab/RemeDi-Instruct)  [![Static Badge](https://img.shields.io/badge/Model(9B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20RemeDi-RL%20checkpoints)](https://huggingface.co/maple-research-lab/RemeDi-RL) 
# 🔬 Method Overview

RemeDi lets every token be revised at every diffusion step. Instead of locking in an early guess, the model evaluates the quality of each token and can remask low-confidence positions, allowing later steps to resample them with richer context: built-in self-correction.

RemeDi extends the original model with a dual-stream transformer:

- Token Prediction Stream (TPS) predicts masked tokens as usual.
- Unmasking Policy Stream (UPS) outputs per-token confidence scores, deciding which tokens to unmask or remask.

At each denoising step, tokens with low confidence can be remasked and resampled, enabling iterative refinement. For the training and RL algorithms, see the Methods section of the paper.
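The two-stream step above can be sketched in a few lines of toy Python. This is an illustration only, not RemeDi's actual implementation or API: `predict`, `confidence`, and the `threshold` are hypothetical stand-ins for the TPS, the UPS, and the unmasking policy.

```python
MASK = "<mask>"

def denoise_step(tokens, predict, confidence, threshold=0.5):
    """One denoising step: fill masked positions, then remask
    low-confidence tokens so later steps can resample them."""
    # Token Prediction Stream (stand-in): fill every masked slot.
    filled = [predict(i) if t == MASK else t for i, t in enumerate(tokens)]
    # Unmasking Policy Stream (stand-in): score each token and remask
    # positions whose confidence falls below the threshold.
    return [t if confidence(i, t) >= threshold else MASK
            for i, t in enumerate(filled)]

# Demo with deterministic stand-ins for the two streams.
vocab = ["the", "cat", "sat"]
predict = lambda i: vocab[i % len(vocab)]
confidence = lambda i, t: 0.9 if t != "cat" else 0.2  # "cat" is uncertain

seq = denoise_step([MASK, MASK, MASK], predict, confidence)
print(seq)  # the low-confidence position stays masked for the next step
```

In the real model both streams share one transformer backbone, and the remasked positions are resampled on the next diffusion step with the newly revealed context.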

*RemeDi architecture and performance radar*

# 📈 Key Results

*RemeDi performance table*

# 🚀 Inference

To run inference, execute:

```sh
git clone https://github.com/maple-research-lab/RemeDi.git
cd RemeDi
# chat with RemeDi
python inference.py
```

# 📥 Citation

```bibtex
@article{huang2025don,
  title={Don't Settle Too Early: Self-Reflective Remasking for Diffusion Language Models},
  author={Huang, Zemin and Wang, Yuhang and Chen, Zhiyang and Qi, Guo-Jun},
  journal={arXiv preprint arXiv:2509.23653},
  year={2025}
}
```