Load-Time Division-by-Zero Crash in llama.cpp GGUF Metadata Parser

โš ๏ธ This repository does not contain a machine learning model.
The file provided is intentionally malformed and exists solely for security research and vulnerability reproduction.

PoC Artifact

  • poc_fp_exception.gguf

Affected Component

  • llama.cpp (GGUF metadata parser)

Vulnerability Summary

A malformed GGUF file can trigger a deterministic crash during model loading, caused by an unguarded integer division by zero in GGUF tensor shape validation.

The crash occurs at load time, before any inference or tensor data processing, and results in immediate process termination via a fatal arithmetic exception (SIGFPE on Linux).

Reproduction

From a llama.cpp build directory:

./llama-gguf poc_fp_exception.gguf r

The trailing `r` selects the read mode of the llama-gguf example tool.