How to use with vLLM
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server with this model:
vllm serve "diffusionfamily/diffullama"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "diffusionfamily/diffullama",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
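The same request can also be issued from Python. The sketch below mirrors the curl example's payload and uses only the standard library; the URL assumes the vLLM server is running locally on its default port 8000.

```python
import json
import urllib.request

# Completion request matching the curl example above
payload = {
    "model": "diffusionfamily/diffullama",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

# With the vLLM server running, send the request and print the completion:
#   req = urllib.request.Request(
#       "http://localhost:8000/v1/completions",
#       data=json.dumps(payload).encode("utf-8"),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["text"])
```

Any OpenAI-compatible client can be pointed at the same endpoint, since vLLM exposes the standard `/v1/completions` API.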
Use Docker:

```shell
docker model run hf.co/diffusionfamily/diffullama
```
diffullama

This model is a fine-tuned version of Llama 2.

Model description

Details and model-loading instructions are available at https://github.com/HKUNLP/DiffuLLaMA.

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
Citation

```bibtex
@misc{gong2024scalingdiffusionlanguagemodels,
      title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
      author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
      year={2024},
      eprint={2410.17891},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.17891},
}
```
Model size: 7B parameters (BF16, Safetensors)

