diffuse-cpp: C++ inference engine for LLaDA on CPU (GGUF format, Q4_K_M quantization)

#17
by Carmenest - opened

Hi @GSAI-ML team,

We've built diffuse-cpp, the first C++ inference engine for LLaDA, using the GGML tensor library (same foundation as llama.cpp).

What it does:

  • Runs LLaDA-8B-Instruct on CPU only (no GPU required)
  • Supports F16, Q8_0, and Q4_K_M quantization via GGUF format
  • Includes a SafeTensors → GGUF converter for your model
  • Entropy-exit adaptive scheduling: reduces steps from 16 to 3–4 on easy prompts

Results (AMD EPYC 12-core, Q4_K_M):

  • 9–11 tok/s on factual prompts with entropy-exit
  • 7.4× thread scaling (near-linear up to physical core count)
  • Outperforms llama.cpp (8.51 tok/s with Llama-3-8B) on easy prompts

Pre-quantized models available:
https://huggingface.co/diffuse-cpp/LLaDA-8B-Instruct-GGUF

Engine source:
https://github.com/iafiscal1212/diffuse-cpp

We've also launched a Kaggle hackathon to benchmark across diverse hardware:
https://www.kaggle.com/competitions/cpu-inference-challenge-diffusion-vs-autoregressive-on-your-hardware

The key finding is that diffusion models have a computational advantage on CPUs due to a memory-compute regime inversion: autoregressive decoding streams the full weight set once per generated token and is memory-bandwidth-bound, while a diffusion pass refines a whole block of tokens at once, amortizing weight reads and shifting the bottleneck to compute. We'd love feedback from the LLaDA team on potential optimizations.

Paper with full methodology: https://doi.org/10.5281/zenodo.19128920
