---
license: cc-by-nc-sa-4.0
language:
- en
base_model: ivitopow/secucoder
tags:
- code
- security
- python
- gguf
- ollama
- llama-cpp
- cybersecurity
- secure-coding
- quantized
task_categories:
- text-generation
---

# SecuCoder — GGUF

Quantized GGUF version of [SecuCoder](https://huggingface.co/ivitopow/secucoder), a fine-tuned Llama 3.1 8B Instruct model for secure Python code generation and vulnerability remediation.

For full model details, training methodology, and evaluation results, see the [main model card](https://huggingface.co/ivitopow/secucoder).

---

## Available Files

| File | Quantization | Size | Use case |
|---|---|---|---|
| `secucoder-Q4_K_M.gguf` | Q4_K_M | ~4.6 GB | Recommended — best balance of quality and size |

---

## Usage with Ollama

**1. Download the Modelfile from this repo and create the model:**

```bash
ollama create secucoder -f Modelfile
```

**2. Run it:**

```bash
ollama run secucoder
```

**3. Or via API:**

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "secucoder",
  "prompt": "Fix the security vulnerability in this Python code.\n\n```python\nname = request.args.get(\"name\")\nresp = make_response(\"Your name is \" + name)\n```\n\nCWE: CWE-079",
  "stream": false
}'
```
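The same request can be made from Python using only the standard library. A minimal sketch — the helper names `build_payload` and `generate` are illustrative, and it assumes an Ollama server listening on the default `localhost:11434`:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def build_payload(code: str, cwe: str) -> dict:
    """Build the same /api/generate payload as the curl example above."""
    prompt = (
        "Fix the security vulnerability in this Python code.\n\n"
        f"```python\n{code}\n```\n\n"
        f"CWE: {cwe}"
    )
    return {"model": "secucoder", "prompt": prompt, "stream": False}


def generate(code: str, cwe: str) -> str:
    """POST the payload to Ollama and return the model's response text."""
    data = json.dumps(build_payload(code, cwe)).encode()
    req = request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate('name = request.args.get("name")\nresp = make_response("Your name is " + name)', "CWE-079")` returns the model's suggested fix as a string.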

---

## Usage with llama.cpp

```bash
./llama-cli \
  -m secucoder-Q4_K_M.gguf \
  --ctx-size 4096 \
  --temp 0.1 \
  --top-p 0.9 \
  -p "You are a secure Python assistant. Fix the vulnerability in this code: ..."
```

---

## Recommended Parameters

| Parameter | Value |
|---|---|
| `temperature` | 0.1 |
| `top_p` | 0.9 |
| `num_ctx` | 4096 |
| `num_predict` | 3072 |
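If you author the Modelfile yourself rather than downloading it from this repo, these parameters map directly onto Ollama `PARAMETER` directives. A minimal sketch — the `FROM` path is an assumption and should point at your local copy of the GGUF file:

```
FROM ./secucoder-Q4_K_M.gguf

PARAMETER temperature 0.1
PARAMETER top_p 0.9
PARAMETER num_ctx 4096
PARAMETER num_predict 3072

SYSTEM """You are a secure Python assistant. Help identify, explain, and fix security issues in Python code. Prefer safe, practical, and production-ready solutions."""
```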

---

## System Prompt

```
You are a secure Python assistant. Help identify, explain, and fix security issues in Python code. Prefer safe, practical, and production-ready solutions.
```

---

## Evaluation

The full SecuCoder system (Q4 + structured prompting + RAG) achieves an overall score of **77.11** vs **60.34** for the untuned Llama 3.1 8B baseline — a **+27.8% improvement** measured by weighted static analysis findings (Bandit + Semgrep).

| Variant | Overall Score |
|---|---|
| Llama 3.1 8B Instruct (baseline) | 60.34 |
| SecuCoder Q4 (this model) | 61.46 |
| SecuCoder Q4 + structured prompt | 64.46 |
| SecuCoder Q4 + structured prompt + RAG | **77.11** |

---

## Related

| Resource | Link |
|---|---|
| Full model (safetensors) | [ivitopow/secucoder](https://huggingface.co/ivitopow/secucoder) |
| Training dataset | [ivitopow/secucoder](https://huggingface.co/datasets/ivitopow/secucoder) |
| Base model | [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) |

---

## License

Released under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). Built on Llama 3.1, subject to [Meta's Llama 3 Community License](https://llama.meta.com/llama3/license/).