Scaling Diffusion Language Models via Adaptation from Autoregressive Models
Paper: [arXiv:2410.17891](https://arxiv.org/abs/2410.17891)
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
Details and model-loading instructions are available at https://github.com/HKUNLP/DiffuLLaMA.
```bibtex
@misc{gong2024scalingdiffusionlanguagemodels,
  title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
  author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
  year={2024},
  eprint={2410.17891},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.17891},
}
```
Base model: meta-llama/Llama-2-7b-hf
Install via pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "diffusionfamily/diffullama"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "diffusionfamily/diffullama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
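The same completion request can also be issued from Python. Below is a minimal sketch using only the standard library; it mirrors the curl payload above, and the endpoint URL and model name are the vLLM defaults shown there (the helper function name is illustrative, not part of any API):

```python
import json
from urllib.request import Request, urlopen

def build_completion_request(prompt, model="diffusionfamily/diffullama",
                             max_tokens=512, temperature=0.5):
    """Build an HTTP request for vLLM's OpenAI-compatible /v1/completions endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return Request(
        "http://localhost:8000/v1/completions",  # default `vllm serve` address
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With the server from the shell snippet above running:
# req = build_completion_request("Once upon a time,")
# with urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```

The `openai` client library can be pointed at the same endpoint via its `base_url` parameter if you prefer it over raw HTTP.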