---
license: bsd-2-clause
language:
- en
base_model:
- GSAI-ML/LLaDA-8B-Instruct
library_name: transformers
tags:
- DLM
- EvoToken
- lora
- text-generation
---
# EvoTokenDLM LoRA Adapter Trained from Pretrained LLaDA-8B-Instruct Weights
Starting from the original masked diffusion language model (MDLM) LLaDA-8B-Instruct, we trained the EvoTokenDLM LoRA adapter with the **Continuous Trajectory Supervision** method.
Our implementation replaces traditional hard binary masks with evolving soft token distributions, letting EvoTokenDLM transition progressively from masked states to discrete outputs and thereby support revisable decoding.
The method and its results are detailed in the paper: [Beyond Hard Masks: Progressive Token Evolution for Diffusion Language Models](https://arxiv.org/abs/2601.07351).
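To make the idea concrete, here is a toy sketch of an evolving soft token distribution. This is our own loose illustration, not the paper's algorithm: it simply mixes a one-hot `[MASK]` distribution with a stand-in predicted distribution as the diffusion time `t` moves from 1 to 0.

```python
# Toy illustration only -- NOT the EvoTokenDLM implementation.
# A hard mask puts all probability mass on the [MASK] token; an evolving
# soft token can be pictured as a mix that drifts from that one-hot mask
# toward the model's predicted distribution as diffusion time t -> 0.
import torch

vocab_size, mask_id = 8, 0
mask_dist = torch.zeros(vocab_size)
mask_dist[mask_id] = 1.0  # hard mask: one-hot on [MASK]

# Stand-in for the model's predicted token distribution at this position.
pred_dist = torch.softmax(torch.randn(vocab_size), dim=-1)

for t in (1.0, 0.5, 0.0):  # t=1: fully masked; t=0: discrete output
    soft_token = t * mask_dist + (1 - t) * pred_dist
    print(f"t={t}: {soft_token.round(decimals=2).tolist()}")
```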
## How to Use
⚠️ **Important:** This is a LoRA adapter and requires the official EvoTokenDLM codebase for inference.
For detailed instructions and code, please refer to the official GitHub repository: [EvoTokenDLM GitHub Repository](https://github.com/aim-uofa/EvoTokenDLM)
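Inference requires the repository above, but if you only want to attach the adapter weights to the base model (e.g., to inspect them), a standard `peft` load should work. This is a minimal sketch under two assumptions: the adapter is stored in the standard PEFT layout, and `ADAPTER_REPO_ID` is a placeholder you must replace with this repository's actual id. Plain `model.generate()` will **not** reproduce the paper's revisable decoding.

```python
# Minimal sketch: attach the LoRA adapter to the LLaDA-8B-Instruct base model.
# Assumes the adapter uses the standard PEFT layout; "ADAPTER_REPO_ID" is a
# placeholder for this repository's id. The revisable decoding loop lives in
# the official EvoTokenDLM codebase, so this alone will not reproduce it.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "GSAI-ML/LLaDA-8B-Instruct"
adapter_id = "ADAPTER_REPO_ID"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModel.from_pretrained(
    base_id, trust_remote_code=True, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, adapter_id).eval()
```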
## Citation
If you find this work helpful for your research, please cite:
```BibTeX
@article{zhong2026beyond,
  title={Beyond Hard Masks: Progressive Token Evolution for Diffusion Language Models},
  author={Zhong, Linhao and Wu, Linyu and Fang, Bozhen and Feng, Tianjian and Jing, Chenchen and Wang, Wen and Zhang, Jiaheng and Chen, Hao and Shen, Chunhua},
  journal={arXiv preprint arXiv:2601.07351},
  year={2026}
}
```