# odoom/nixpkgs-security-patches
**Deprecated.** Use `odoom/nixpkgs-security-qwen-lora` instead: it is based on Qwen 2.5 Coder 32B, supports multi-turn tool calling, and reaches lower loss (0.54 vs 0.87) and higher token accuracy (90% vs 80%).

How to use `odoom/nixpkgs-security-lora` with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base_model, "odoom/nixpkgs-security-lora")
```
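Since the v2 adapter was trained on single-turn system/user/assistant examples, prompts at inference time follow Mistral Instruct's `[INST] … [/INST]` template. A minimal sketch of building such a prompt by hand, assuming the common convention of folding the system text into the user turn (the `build_prompt` helper is illustrative, not part of this repo):

```python
def build_prompt(system: str, user: str) -> str:
    """Illustrative helper: format one system/user turn for
    Mistral-7B-Instruct, which has no dedicated system role, by
    prepending the system text to the user message."""
    return f"<s>[INST] {system}\n\n{user} [/INST]"

prompt = build_prompt(
    "You are a Nixpkgs security patch assistant.",
    "Summarize the CVE fix in this derivation.",
)
```

In practice the tokenizer's chat template produces the same layout; the manual version only makes the single-turn structure explicit.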
| | v2 (this repo) | v3 (new repo) |
|---|---|---|
| Base model | Mistral 7B Instruct v0.2 | Qwen 2.5 Coder 32B Instruct |
| Format | Single-turn (system/user/assistant) | Multi-turn tool-calling conversations |
| Loss | 0.867 | 0.540 |
| Token accuracy | 80.5% | 90.1% |
| Adapter size | 160 MB | 256 MB |
| Tool calling | Broken (`raw: true` disabled it) | Native Qwen 2.5 tool calling |
Compatible Workers AI model: `@cf/mistral/mistral-7b-instruct-v0.2-lora`

Training metrics

| Metric | Start | End |
|---|---|---|
| Loss | 1.166 | 0.867 |
| Token accuracy | 74.6% | 80.5% |
| Eval loss | — | 0.924 |
| Eval accuracy | — | 78.4% |
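The losses above are per-token cross-entropy, so they convert to perplexity via `exp(loss)`. A quick sanity check using the reported numbers (the conversion is the standard one, not something this repo reports):

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    # Perplexity is the exponential of the per-token cross-entropy loss.
    return math.exp(cross_entropy_loss)

# Reported training losses for this adapter:
start = perplexity(1.166)  # before fine-tuning, ~3.21
end = perplexity(0.867)    # after fine-tuning, ~2.38
```

By the same conversion, the v3 adapter's loss of 0.540 corresponds to a perplexity of roughly 1.72.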
Base model
mistralai/Mistral-7B-Instruct-v0.2