Mark as deprecated, point to nixpkgs-security-qwen-lora

README.md (CHANGED)

**Before:**

tags:
- lora
- nix
- patch-generation
datasets:
- odoom/nixpkgs-security-patches
---

# nixpkgs-security-lora

LoRA adapter for generating nixpkgs security patches. Fine-tuned on [odoom/nixpkgs-security-patches](https://huggingface.co/datasets/odoom/nixpkgs-security-patches).

## Model Details
- **Base model**: [Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- **Method**: QLoRA (4-bit NF4 quantization + LoRA rank 32)
- **Target**: Cloudflare Workers AI `@cf/mistral/mistral-7b-instruct-v0.2-lora`
- **Adapter size**: 160 MB

## Training

- **LoRA rank**: 32, alpha: 64, dropout: 0.05
- **Target modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Epochs**: 3 (110 steps)
- **Effective batch size**: 16 (batch 1 × gradient accumulation 16)
- **Learning rate**: 2e-4, cosine schedule
- **Max sequence length**: 4,096 tokens
- **Hardware**: NVIDIA L4 GPU (HuggingFace Jobs)
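
For concreteness, here is how these settings map onto a PEFT/bitsandbytes setup. This is an illustrative sketch, not the published training script; everything beyond the hyperparameters listed above (imports, variable names, trainer wiring) is an assumption:

```python
# Illustrative sketch only: reconstructs the QLoRA configuration listed
# above. The actual training script for this adapter is not published.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", quantization_config=bnb
)
model = prepare_model_for_kbit_training(model)

# LoRA rank 32, alpha 64, dropout 0.05 on all attention + MLP projections
lora = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Batch 1 x gradient accumulation 16 = effective batch size 16
args = TrainingArguments(
    output_dir="nixpkgs-security-lora",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    bf16=True,
)
```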

### Training Metrics

| Eval loss | - | 0.924 |
| Eval accuracy | - | 78.4% |

Training time: ~61 minutes.

## Training Data

586 training examples and 66 eval examples derived from merged security PRs in [NixOS/nixpkgs](https://github.com/NixOS/nixpkgs). Each example pairs a CVE description with the actual nix patch diff that fixed it.

Quality filters applied (sketched below):
- Only merged PRs with security-related titles (CVE, vulnerability, security fix)
- **Removed version bumps and hash-only updates**: these are deterministic and don't need AI (763 examples filtered out)
- Kept only complex fixes: fetchpatch backports, patch additions, config changes, etc.
- Removed trivially small diffs (< 3 changed lines)
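
A rough sketch of what these filters might look like in code (hypothetical: the real pipeline and its field names are not published):

```python
# Hypothetical reconstruction of the quality filters described above;
# the real dataset pipeline and its field names are not published.
import re

SECURITY_TITLE = re.compile(r"CVE-\d{4}-\d+|vulnerab|security\s+fix", re.I)
BUMP_LINE = re.compile(r"^[-+]\s*(version|sha256|hash|rev)\s*=")

def keep_example(pr: dict) -> bool:
    """pr is assumed to have 'merged', 'title', and 'diff' keys."""
    if not (pr["merged"] and SECURITY_TITLE.search(pr["title"])):
        return False
    changed = [
        line for line in pr["diff"].splitlines()
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]
    if len(changed) < 3:  # trivially small diff
        return False
    # Pure version bumps / hash-only updates are deterministic: no model needed
    if all(BUMP_LINE.match(line) for line in changed):
        return False
    return True
```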

## Changelog

- **v2** (2026-03-03): Retrained on filtered dataset; removed 763 version bump / hash-only examples.
- **v1** (2026-03-02): Initial training on 1,273 unfiltered examples.

## Intended Use

This adapter is designed for the [Vulnpatch](https://github.com/Vulnpatch) automated security patch agent. Given a CVE description and affected package info, it generates candidate nix package patches.

## Usage with Cloudflare Workers AI

```javascript
// Run the base model with this LoRA adapter applied via the `lora` option.
const response = await env.AI.run(
  "@cf/mistral/mistral-7b-instruct-v0.2-lora",
  {
    messages: [
      { role: "user", content: "Fix CVE-2024-1234 in package foo..." }
    ],
    lora: "nixpkgs-security-lora"
  }
);
```

## Usage with Transformers + PEFT

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then apply the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(model, "odoom/nixpkgs-security-lora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```
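
Continuing the block above, generation looks roughly like this (the prompt text is illustrative, and sampling settings are left at their defaults):

```python
# Minimal generation example; the prompt text is illustrative.
messages = [{"role": "user", "content": "Fix CVE-2024-1234 in package foo..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```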

## Limitations

- Specialized for nixpkgs package expressions, not a general code model
- Training data is Nix-specific; won't generalize to other package managers
- May produce patches that need manual review for correctness

**After:**

tags:
- lora
- nix
- patch-generation
- deprecated
datasets:
- odoom/nixpkgs-security-patches
---

# nixpkgs-security-lora (Deprecated)

> **This adapter is deprecated.** Use [odoom/nixpkgs-security-qwen-lora](https://huggingface.co/odoom/nixpkgs-security-qwen-lora) instead: Qwen 2.5 Coder 32B with multi-turn tool calling, lower loss (0.54 vs 0.87), and higher accuracy (90% vs 80%).

## What Changed

| | v2 (this repo) | v3 (new repo) |
|---|---|---|
| Base model | Mistral 7B Instruct v0.2 | Qwen 2.5 Coder 32B Instruct |
| Format | Single-turn (system/user/assistant) | Multi-turn tool-calling conversations (sketched below) |
| Loss | 0.867 | 0.540 |
| Token accuracy | 80.5% | 90.1% |
| Adapter size | 160 MB | 256 MB |
| Tool calling | Broken (`raw: true` disabled it) | Native Qwen 2.5 tool calling |
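
The "Format" row refers to how the training examples are structured. A rough illustration of the multi-turn shape (the tool name and fields are invented for this sketch, not taken from the v3 dataset):

```python
# Illustrative shape of one v3 multi-turn tool-calling conversation;
# the tool name and arguments here are hypothetical.
conversation = [
    {"role": "system", "content": "You are a nixpkgs security patch agent."},
    {"role": "user", "content": "Fix CVE-2024-1234 in package foo."},
    # The model first requests context via a tool call...
    {"role": "assistant", "tool_calls": [
        {"name": "read_package", "arguments": {"attr": "foo"}},
    ]},
    {"role": "tool", "content": "{ lib, stdenv, fetchurl }: ..."},
    # ...then produces the candidate patch.
    {"role": "assistant", "content": "diff --git a/pkgs/..."},
]
```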

## Original Model Details (v2)

- **Base model**: [Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- **Method**: QLoRA (4-bit NF4 quantization + LoRA rank 32)
- **Target**: Cloudflare Workers AI `@cf/mistral/mistral-7b-instruct-v0.2-lora`
- **Adapter size**: 160 MB
- **Training data**: 586 complex security patches (version bumps filtered out)
- **Epochs**: 3 (110 steps), ~61 minutes on NVIDIA L4

### Training Metrics

| Eval loss | - | 0.924 |
| Eval accuracy | - | 78.4% |

## Changelog

- **v2** (2026-03-03): Retrained on filtered dataset; removed 763 version bump / hash-only examples.
- **v1** (2026-03-02): Initial training on 1,273 unfiltered examples.