# SecuCoder — GGUF

Quantized GGUF version of SecuCoder, a fine-tuned Llama 3.1 8B Instruct model for secure Python code generation and vulnerability remediation.

For full model details, training methodology, and evaluation results, see the main model card.


## Available Files

| File | Quantization | Size | Use case |
|------|--------------|------|----------|
| `secucoder-Q4_K_M.gguf` | Q4_K_M | ~4.6 GB | **Recommended**: best balance of quality and size |

## Usage with Ollama

1. Download the Modelfile from this repo and create the model:

   ```shell
   ollama create secucoder -f Modelfile
   ```

2. Run it:

   ```shell
   ollama run secucoder
   ```

3. Or call the API directly:

   ```shell
   curl http://localhost:11434/api/generate -d '{
     "model": "secucoder",
     "prompt": "Fix the security vulnerability in this Python code.\n\n```python\nname = request.args.get(\"name\")\nresp = make_response(\"Your name is \" + name)\n```\n\nCWE: CWE-079",
     "stream": false
   }'
   ```
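The same request can be issued from Python. A minimal sketch using only the standard library (`build_payload` and `generate` are illustrative helper names, not part of Ollama; actually calling `generate` requires a local Ollama server with the model created):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def build_payload(code: str, cwe: str, model: str = "secucoder") -> dict:
    """Assemble the same JSON body the curl example sends."""
    prompt = (
        "Fix the security vulnerability in this Python code.\n\n"
        f"```python\n{code}\n```\n\n"
        f"CWE: {cwe}"
    )
    return {"model": model, "prompt": prompt, "stream": False}


def generate(payload: dict) -> str:
    """POST the payload to a running Ollama server and return the model's reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


payload = build_payload(
    'name = request.args.get("name")\nresp = make_response("Your name is " + name)',
    "CWE-079",
)
# generate(payload)  # uncomment with a running Ollama server
```

With `"stream": false`, Ollama returns a single JSON object whose `response` field holds the full completion; with streaming enabled you would instead read newline-delimited JSON chunks.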

## Usage with llama.cpp

```shell
./llama-cli \
  -m secucoder-Q4_K_M.gguf \
  --ctx-size 4096 \
  --temp 0.1 \
  --top-p 0.9 \
  -p "You are a secure Python assistant. Fix the vulnerability in this code: ..."
```

## Recommended Parameters

| Parameter | Value |
|-----------|-------|
| `temperature` | 0.1 |
| `top_p` | 0.9 |
| `num_ctx` | 4096 |
| `num_predict` | 3072 |
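These parameters map directly onto Ollama `PARAMETER` directives. A minimal Modelfile sketch for this quant (the repo's actual Modelfile may additionally set the chat template and system prompt):

```
FROM ./secucoder-Q4_K_M.gguf

PARAMETER temperature 0.1
PARAMETER top_p 0.9
PARAMETER num_ctx 4096
PARAMETER num_predict 3072
```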

## System Prompt

```
You are a secure Python assistant. Help identify, explain, and fix security issues in Python code. Prefer safe, practical, and production-ready solutions.
```

## Evaluation

The full SecuCoder system (Q4 quantization + structured prompting + RAG) achieves an overall score of 77.11, versus 60.34 for the untuned Llama 3.1 8B baseline: a +27.8% relative improvement, as measured by weighted static-analysis findings (Bandit + Semgrep).

| Variant | Overall Score |
|---------|---------------|
| Llama 3.1 8B Instruct (baseline) | 60.34 |
| SecuCoder Q4 (this model) | 61.46 |
| SecuCoder Q4 + structured prompt | 64.46 |
| SecuCoder Q4 + structured prompt + RAG | 77.11 |
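The headline improvement figure follows directly from the table; a quick arithmetic check:

```python
baseline = 60.34      # Llama 3.1 8B Instruct
full_system = 77.11   # SecuCoder Q4 + structured prompt + RAG

# Relative improvement of the full system over the untuned baseline
relative_gain = (full_system - baseline) / baseline * 100
print(f"+{relative_gain:.1f}%")  # -> +27.8%
```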

## Related

| Resource | Link |
|----------|------|
| Full model (safetensors) | ivitopow/secucoder |
| Training dataset | ivitopow/secucoder |
| Base model | meta-llama/Llama-3.1-8B-Instruct |

## License

Released under CC BY-NC-SA 4.0. Built on Llama 3.1 and therefore also subject to Meta's Llama 3.1 Community License.
