---
pipeline_tag: text-generation
---

# Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods

This repository contains the code for the `SEPO` algorithm presented in the paper [Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods](https://huggingface.co/papers/2502.01384).

`SEPO` (Score Entropy Policy Optimization) is an efficient, broadly applicable, and theoretically justified policy gradient algorithm for fine-tuning discrete diffusion models over non-differentiable rewards. Our numerical experiments across several discrete generative tasks demonstrate the scalability and efficiency of our method, including fine-tuning a masked diffusion language model on DNA sequences.

*Figure: Denoising RLHF process visualization.*
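For intuition on what policy gradient fine-tuning over a non-differentiable reward looks like, here is a minimal REINFORCE-style sketch in PyTorch. It is illustrative only and is **not** the SEPO objective: `model.log_prob` and `reward_fn` are hypothetical placeholders, and the actual algorithm lives in the official repository linked below.

```python
import torch

def policy_gradient_step(model, optimizer, sequences, reward_fn):
    """One illustrative REINFORCE update (not the SEPO objective).

    `model.log_prob` is a hypothetical method returning per-sequence
    log-likelihoods; `reward_fn` is any non-differentiable scorer
    (e.g. a DNA sequence reward).
    """
    log_probs = model.log_prob(sequences)       # differentiable w.r.t. model parameters
    with torch.no_grad():
        rewards = reward_fn(sequences)          # non-differentiable reward signal
        advantages = rewards - rewards.mean()   # simple mean baseline to reduce variance
    loss = -(advantages * log_probs).mean()     # REINFORCE estimator of the policy gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```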

For more details and the full implementation, please refer to the [official GitHub repository](https://github.com/ozekri/SEPO).

## Sample Usage: Download Checkpoint

You can download the fine-tuned models from Hugging Face directly using the `huggingface_hub` Python library to reproduce results (a sketch for loading the downloaded checkpoint appears after the citation below):

```python
from huggingface_hub import hf_hub_download

# Example: download the SEPO fine-tuned model checkpoint
ckpt_path = hf_hub_download(
    repo_id="Xssama/SEPO_DNA",
    filename="finetuned_sepo_kl.ckpt",  # finetuned_sepo_kl_gf.ckpt for SEPO with gradient flow
    cache_dir="./checkpoints",          # optional: specify your preferred local directory
)
print(f"Checkpoint downloaded to: {ckpt_path}")
```

Alternatively, you can use `wget`:

```bash
wget https://huggingface.co/Xssama/SEPO-DNA/resolve/main/finetuned_sepo_kl.ckpt -P ./checkpoints/
```

## Citation

If you find this work useful in your research, please consider citing:

```bibtex
@article{zekri2025fine,
  title={Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods},
  author={Zekri, Oussama and Boull{\'e}, Nicolas},
  journal={arXiv preprint arXiv:2502.01384},
  year={2025}
}
```
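Once downloaded, the checkpoint can be inspected with PyTorch. This is a minimal sketch, reusing `ckpt_path` from the download snippet above and assuming a standard PyTorch/Lightning `.ckpt` file; the actual model class and state-dict layout are defined in the official repository.

```python
import torch

# Assumption: the checkpoint is a standard PyTorch/Lightning .ckpt file.
# weights_only=False may be needed on recent PyTorch versions, since such
# checkpoints can contain non-tensor objects; only load files you trust.
state = torch.load(ckpt_path, map_location="cpu", weights_only=False)
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:10])  # peek at the top-level keys
```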