EvoTokenDLM: LoRA Adapter Trained from Pretrained LLaDA-8B-Instruct Weights
Starting from the pretrained masked diffusion language model (MDLM) LLaDA-8B-Instruct, we trained the EvoTokenDLM LoRA adapter with the Continuous Trajectory Supervision method.
EvoTokenDLM replaces the traditional hard binary masks with evolving soft token distributions, so that each position transitions progressively from a masked state to a discrete output. Because intermediate predictions remain soft until the end of the trajectory, earlier choices can still change, which makes decoding revisable.
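To make the soft-token idea concrete, here is a minimal PyTorch sketch of how a masked input could evolve toward a discrete output. It is illustrative only, not the official implementation: `soft_token_inputs`, the linear `alpha` mixing schedule, and all tensor names are hypothetical stand-ins; the actual evolution schedule and the Continuous Trajectory Supervision loss are defined in the paper and the official EvoTokenDLM codebase.

```python
# Illustrative sketch only. All names and the linear alpha schedule are
# hypothetical; see the paper / official repo for the real method.
import torch
import torch.nn.functional as F

def soft_token_inputs(logits, mask_embedding, token_embeddings, alpha):
    """Mix the [MASK] embedding with the expected token embedding.

    logits:           (batch, seq, vocab) current model predictions
    mask_embedding:   (hidden,) embedding of the [MASK] token
    token_embeddings: (vocab, hidden) embedding matrix
    alpha:            scalar in [0, 1]; 0 = fully masked, 1 = fully discrete
    """
    probs = F.softmax(logits, dim=-1)       # soft token distribution per position
    expected = probs @ token_embeddings     # expected embedding: (batch, seq, hidden)
    return (1.0 - alpha) * mask_embedding + alpha * expected

# Toy shapes for a quick smoke test.
B, S, V, H = 2, 4, 10, 8
logits = torch.randn(B, S, V)
emb_matrix = torch.randn(V, H)
mask_emb = torch.randn(H)
x = soft_token_inputs(logits, mask_emb, emb_matrix, alpha=0.3)  # (2, 4, 8)
```

As `alpha` anneals from 0 to 1 over the reverse trajectory, the input drifts smoothly from the masked state toward a discrete token, rather than being committed by a hard binary mask at a single step.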
The method and its results are detailed in the paper: Beyond Hard Masks: Progressive Token Evolution for Diffusion Language Models.
How to Use
⚠️ Important: This is a LoRA adapter and requires the official EvoTokenDLM codebase for inference.
For detailed instructions and code, please refer to the official GitHub repository: EvoTokenDLM GitHub Repository
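As a rough orientation, and assuming the adapter is published in standard PEFT format (an assumption; verify against the repository), loading it on top of the base model might look like the sketch below. Generation itself still requires the sampling loop from the official EvoTokenDLM codebase, so this sketch stops after loading.

```python
# Minimal loading sketch, assuming a standard PEFT-format adapter.
# Inference (trajectory evolution, revisable decoding) requires the
# official EvoTokenDLM repository; this only assembles the model.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "GSAI-ML/LLaDA-8B-Instruct"
adapter_id = "zhongzero/EvoToken_LLaDA_Instruct_8B_Lora"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModel.from_pretrained(
    base_id, trust_remote_code=True, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```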
Citation
If you find this work helpful for your research, please cite:
@article{zhong2026beyond,
  title={Beyond Hard Masks: Progressive Token Evolution for Diffusion Language Models},
  author={Zhong, Linhao and Wu, Linyu and Fang, Bozhen and Feng, Tianjian and Jing, Chenchen and Wang, Wen and Zhang, Jiaheng and Chen, Hao and Shen, Chunhua},
  journal={arXiv preprint arXiv:2601.07351},
  year={2026}
}
Model tree for zhongzero/EvoToken_LLaDA_Instruct_8B_Lora
Base model: GSAI-ML/LLaDA-8B-Instruct