# PoC: Heap-Buffer-Overflow Read in llama.cpp UGM Tokenizer (precompiled_charsmap)

## Vulnerability
A heap-buffer-overflow read exists in `llm_tokenizer_ugm::llm_tokenizer_ugm()` in
`src/llama-vocab.cpp` when loading a GGUF model file whose
`tokenizer.ggml.precompiled_charsmap` array is undersized (1-3 bytes).
The constructor checks `size() > 0` but then immediately reinterprets the buffer's data
pointer as a `const uint32_t *` and reads the 4-byte XCDA blob size. When the buffer holds
fewer than 4 bytes, this read extends 1-3 bytes past the allocated heap region.
- Type: CWE-125 (Out-of-bounds Read)
- Location: `src/llama-vocab.cpp:809`
- Trigger: loading a crafted GGUF file with `tokenizer.ggml.model = "t5"` (UGM tokenizer path)
- PoC file size: 396 bytes
- Impact: information disclosure (heap data leaked into `xcda_blob_size`), denial of service
## Files

| File | Description |
|---|---|
| `poc_charsmap_undersize.py` | Generates the minimal 396-byte crafted GGUF file |
| `reproduce.sh` | One-command reproduction: clone, build with ASAN, generate PoC, trigger bug |
| `SUBMISSION.md` | Full vulnerability writeup |
## Reproduction

```sh
chmod +x reproduce.sh
./reproduce.sh
```
Requires: `git`, `cmake`, `python3`, and a C++ compiler with AddressSanitizer support.
## Responsible Disclosure

This PoC is provided for security research purposes under responsible disclosure. Do not use it against systems you are not authorized to test.