|
|
--- |
|
|
base_model: |
|
|
- GSAI-ML/LLaDA-8B-Instruct |
|
|
datasets: |
|
|
- TIGER-Lab/AceCode-87K |
|
|
license: mit |
|
|
pipeline_tag: text-generation |
|
|
library_name: transformers |
|
|
--- |
|
|
|
|
|
# ESPO: Principled RL for Diffusion LLMs Emerges from a Sequence-Level Perspective |
|
|
|
|
|
This repository contains the fully fine-tuned post-training model for code tasks, built on LLaDA-8B-Instruct, for the paper [Principled RL for Diffusion LLMs Emerges from a Sequence-Level Perspective](https://huggingface.co/papers/2512.03759).
|
|
|
|
|
ESPO (ELBO-based Sequence-level Policy Optimization) is a principled reinforcement learning framework for Diffusion Large Language Models (dLLMs). Unlike RL methods designed for autoregressive models (e.g., GRPO), which rely on token-level likelihoods, ESPO views the **entire sequence generation as a single action** and leverages the **ELBO** as a tractable proxy for the sequence-level likelihood. This design resolves the fundamental mismatch between token-level RL objectives and the non-autoregressive nature of dLLMs.
|
|
|
|
|
ESPO introduces: |
|
|
- **Sequence-level optimization** for diffusion LLMs via the ELBO objective. |
|
|
- **Per-token normalized ratio estimation** and **robust KL regularization** for stable large-scale training (sketched schematically after this list).
|
|
- **Consistent gains** across math, coding, and planning benchmarks. |
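As a schematic view of how these pieces fit together (our notation, not copied from the paper; see the paper for the exact estimator): the sequence-level importance ratio is built from the ELBO \\(\mathcal{B}_\theta(y \mid x) \le \log p_\theta(y \mid x)\\), normalized by the response length \\(|y|\\) for stability,

$$
r_\theta(y \mid x) \;=\; \exp\!\left( \frac{\mathcal{B}_\theta(y \mid x) - \mathcal{B}_{\theta_{\mathrm{old}}}(y \mid x)}{|y|} \right),
$$

and this ratio then enters a GRPO-style clipped surrogate together with the robust KL regularization toward the reference model.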
|
|
|
|
|
**Project Page**: [https://jingyangou.github.io/ESPO-Demo/](https://jingyangou.github.io/ESPO-Demo/) |
|
|
**Code**: [https://github.com/ML-GSAI/ESPO](https://github.com/ML-GSAI/ESPO) |
|
|
|
|
|
<div style="display: flex; justify-content: center; flex-wrap: wrap;"> |
|
|
<img src="https://github.com/ML-GSAI/ESPO/raw/main/fig/sudoku_ablation_1_smoothed-page1.png" style="width: 49%" /> |
|
|
<img src="https://github.com/ML-GSAI/ESPO/raw/main/fig/sudoku_kl_ablation_smoothed.png" style="width: 49%" /> |
|
|
</div> |
|
|
|
|
|
## Quickstart |
|
|
|
|
|
We release ESPO fine-tuned checkpoints built on LLaDA-8B-Instruct. ESPO-Code is a full fine-tune (no LoRA), while ESPO-GSM8K, ESPO-Math, ESPO-Countdown, and ESPO-Sudoku are lightweight LoRA adapters that load on top of the base LLaDA-8B-Instruct model.
|
|
|
|
|
```python |
|
|
from transformers import AutoModelForCausalLM, AutoTokenizer |
|
|
from peft import PeftModel |
|
|
# Note: 'eval.generate_utils' is part of the original ESPO GitHub repository. |
|
|
# You might need to clone the repository (https://github.com/ML-GSAI/ESPO) |
|
|
# and add its root directory to your Python path to import `eval.generate_utils`. |
|
|
from eval.generate_utils import generate |
|
|
|
|
|
base_model_path = 'GSAI-ML/LLaDA-8B-Instruct' |
|
|
peft_model_path = 'GSAI-ML/ESPO-Math'  # LoRA adapter; the full ESPO-Code model loads directly without PEFT (see below)
|
|
tokenizer = AutoTokenizer.from_pretrained(base_model_path) |
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
|
    base_model_path, trust_remote_code=True, torch_dtype="bfloat16", device_map="cuda")
|
|
peft_model = PeftModel.from_pretrained(model, peft_model_path, device_map="cuda") |
|
|
prompt = "The point $(0,0)$ is reflected over the vertical line $x=1$. When its image is then reflected over the line $y=2$, what is the resulting point? |
|
|
|
|
|
Write your answer in the form $(x, y)$ where $x$ and $y$ are real numbers." |
|
|
messages = [{"role": "user", "content": prompt}] |
|
|
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") |
|
|
# steps: number of diffusion denoising steps; gen_length: length of the generated response;
# remasking="low_confidence" re-masks the lowest-confidence predicted tokens at each step.
output_ids = generate(peft_model, input_ids, tokenizer, steps=128, gen_length=256, temperature=0.9, remasking="low_confidence")
|
|
output_text = tokenizer.batch_decode(output_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0] |
|
|
print(output_text) |
|
|
``` |
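ESPO-Code is a full fine-tune, so it is loaded directly rather than through PEFT. A minimal sketch, assuming the checkpoint is published under the `GSAI-ML/ESPO-Code` repository id (with its tokenizer bundled) and that the ESPO repository's `eval.generate_utils` is on your Python path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# As above, `generate` comes from the ESPO GitHub repository (eval/generate_utils.py).
from eval.generate_utils import generate

model_path = 'GSAI-ML/ESPO-Code'  # assumed repository id for the full model
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Full fine-tune: no PeftModel wrapper is needed.
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype="bfloat16", device_map="cuda")

prompt = "Write a Python function that checks whether a string is a palindrome."
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

output_ids = generate(model, input_ids, tokenizer, steps=128, gen_length=256,
                      temperature=0.9, remasking="low_confidence")
print(tokenizer.batch_decode(output_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```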
|
|
|
|
|
## Citation |
|
|
If you find ESPO useful in your research, please consider citing our paper: |
|
|
|
|
|
```bibtex |
|
|
@article{ou2025principledrldiffusionllms, |
|
|
title={Principled RL for Diffusion LLMs Emerges from a Sequence-Level Perspective}, |
|
|
author={Jingyang Ou and Jiaqi Han and Minkai Xu and Shaoxuan Xu and Jianwen Xie and Stefano Ermon and Yi Wu and Chongxuan Li}, |
|
|
journal={arXiv preprint arXiv:2512.03759}, |
|
|
year={2025}, |
|
|
} |
|
|
``` |