Scaling Diffusion Language Models via Adaptation from Autoregressive Models
Paper: [arXiv:2410.17891](https://arxiv.org/abs/2410.17891)
Install llama.cpp (e.g. on Windows via winget):

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/diffullama-GGUF

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/diffullama-GGUF
```

Alternatively, download a pre-built binary from https://github.com/ggerganov/llama.cpp/releases and run the bundled executables:

```shell
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/diffullama-GGUF

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/diffullama-GGUF
```

Or build from source:

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/diffullama-GGUF

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/diffullama-GGUF
```

Or run it with Docker Model Runner:

```shell
docker model run hf.co/QuantFactory/diffullama-GGUF
```

This is a quantized version of diffusionfamily/diffullama, created using llama.cpp.
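Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal Python sketch of querying it, assuming the server's default address of `127.0.0.1:8080` (adjust `SERVER_URL` if you passed `--host` or `--port`):

```python
import json
import urllib.request

# Assumed default llama-server endpoint; change if you started it differently.
SERVER_URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completions payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query_server(prompt: str) -> str:
    """POST the request and return the generated text.

    Requires the llama-server process from the step above to be running.
    """
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (only works while the server is up):
# print(query_server("Summarize diffusion language models in one sentence."))
```

Calling `query_server(...)` needs the server running; `build_chat_request` just shows the payload shape the endpoint expects.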
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
Details and model-loading instructions can be found at https://github.com/HKUNLP/DiffuLLaMA.
```bibtex
@misc{gong2024scalingdiffusionlanguagemodels,
  title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
  author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
  year={2024},
  eprint={2410.17891},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.17891},
}
```
Available quantization levels:

- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
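As a rough guide to choosing a level: a quantized model's file size scales with bits per weight. A back-of-the-envelope sketch for a 7B-parameter model (this ignores llama.cpp's per-block scale overhead, so real GGUF files run somewhat larger):

```python
# Approximate Llama-2-7B parameter count.
PARAMS = 7e9

def approx_size_gib(bits: int, params: float = PARAMS) -> float:
    """Estimated file size in GiB: weights * bits / 8 bytes, converted to GiB."""
    return params * bits / 8 / 2**30

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gib(bits):.1f} GiB")
```

In practice this is why 4-bit variants are a common default: roughly a quarter of the fp16 footprint while keeping quality acceptable for most uses.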
Base model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
Or install llama.cpp via Homebrew:

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/diffullama-GGUF

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/diffullama-GGUF
```