Instructions to use kurtpayne/skillscan-detector-v4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use kurtpayne/skillscan-detector-v4 with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="kurtpayne/skillscan-detector-v4",
    filename="skillscan-detector-v4-q4_k_m.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use kurtpayne/skillscan-detector-v4 with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf kurtpayne/skillscan-detector-v4:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf kurtpayne/skillscan-detector-v4:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf kurtpayne/skillscan-detector-v4:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf kurtpayne/skillscan-detector-v4:Q4_K_M
```
Use pre-built binary
```shell
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf kurtpayne/skillscan-detector-v4:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf kurtpayne/skillscan-detector-v4:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf kurtpayne/skillscan-detector-v4:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf kurtpayne/skillscan-detector-v4:Q4_K_M
```
Use Docker
docker model run hf.co/kurtpayne/skillscan-detector-v4:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use kurtpayne/skillscan-detector-v4 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "kurtpayne/skillscan-detector-v4"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kurtpayne/skillscan-detector-v4",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/kurtpayne/skillscan-detector-v4:Q4_K_M
```
- Ollama
How to use kurtpayne/skillscan-detector-v4 with Ollama:
ollama run hf.co/kurtpayne/skillscan-detector-v4:Q4_K_M
- Unsloth Studio new
How to use kurtpayne/skillscan-detector-v4 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for kurtpayne/skillscan-detector-v4 to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for kurtpayne/skillscan-detector-v4 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for kurtpayne/skillscan-detector-v4 to start chatting
```
- Pi new
How to use kurtpayne/skillscan-detector-v4 with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf kurtpayne/skillscan-detector-v4:Q4_K_M
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add to `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "kurtpayne/skillscan-detector-v4:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent new
How to use kurtpayne/skillscan-detector-v4 with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf kurtpayne/skillscan-detector-v4:Q4_K_M
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default kurtpayne/skillscan-detector-v4:Q4_K_M
```
Run Hermes
hermes
- Docker Model Runner
How to use kurtpayne/skillscan-detector-v4 with Docker Model Runner:
docker model run hf.co/kurtpayne/skillscan-detector-v4:Q4_K_M
- Lemonade
How to use kurtpayne/skillscan-detector-v4 with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull kurtpayne/skillscan-detector-v4:Q4_K_M
```
Run and chat with the model
lemonade run user.skillscan-detector-v4-Q4_K_M
List all available models
lemonade list
SkillScan Detector v4
A fine-tuned Qwen2.5-1.5B-Instruct model for detecting security threats in AI agent skill files.
What it does
Analyzes AI agent skill files (.md) and outputs structured JSON with:
- Verdict: benign or malicious
- Labels: specific attack types detected
- Confidence: 0-1 score
- Reasoning: human-readable explanation citing evidence from the text
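As a hypothetical illustration of that output shape (the field names follow the list above; the verdict, labels, score, and reasoning text here are invented for the example), a response can be parsed as plain JSON:

```python
import json

# Hypothetical raw model output (illustrative values only)
raw = """{
  "verdict": "malicious",
  "labels": ["prompt_injection", "data_exfiltration"],
  "confidence": 0.91,
  "reasoning": "The skill instructs the agent to ignore prior rules and POST local files to an external URL."
}"""

result = json.loads(raw)
assert result["verdict"] in ("benign", "malicious")
assert 0.0 <= result["confidence"] <= 1.0
print(result["verdict"], result["labels"])
```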
Attack types detected
| Class | F1 | Precision | Recall |
|---|---|---|---|
| path_traversal | 0.857 | 0.882 | 0.833 |
| social_engineering | 0.857 | 0.750 | 1.000 |
| prompt_injection | 0.474 | 0.941 | 0.317 |
| code_injection | 0.424 | 0.438 | 0.412 |
| supply_chain | 0.340 | 0.258 | 0.500 |
| evasion | 0.308 | 0.182 | 1.000 |
| data_exfiltration | 0.148 | 0.080 | 1.000 |
Macro F1: 0.487 | Verdict accuracy: 85.2% | Parse failures: 1.2%
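As a sanity check, the macro F1 above is simply the unweighted mean of the per-class F1 scores in the table:

```python
# Per-class F1 scores copied from the table above
f1 = {
    "path_traversal": 0.857,
    "social_engineering": 0.857,
    "prompt_injection": 0.474,
    "code_injection": 0.424,
    "supply_chain": 0.340,
    "evasion": 0.308,
    "data_exfiltration": 0.148,
}

# Macro F1 = unweighted mean across classes
macro_f1 = sum(f1.values()) / len(f1)
print(round(macro_f1, 3))  # 0.487
```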
Usage with llama-cpp-python
Install the runtime with `pip install llama-cpp-python`.
Download the GGUF model with `hf download` (the older `huggingface-cli download` command is deprecated in its favor).
Model details
- Base model: Qwen/Qwen2.5-1.5B-Instruct (Apache 2.0)
- Fine-tuning: QLoRA (r=32, alpha=64) on 20,035 teacher-distilled examples
- Teachers: Claude Sonnet + GPT-4o (structured security analysis)
- Quantization: GGUF Q4_K_M (935 MB)
- Inference: CPU-only via llama.cpp, ~2-4s per file
- License: Apache 2.0
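As a rough arithmetic check (not from the model card), the 935 MB Q4_K_M file size is consistent with the ~4.5-5 bits per weight that K-quant Q4 formats typically average; the parameter count used here is an assumed approximation for Qwen2.5-1.5B:

```python
# Rough consistency check: Q4_K_M quantization averages roughly 4.5-5 bits/weight.
file_bytes = 935e6   # 935 MB quantized GGUF (from the Files table)
n_params = 1.54e9    # assumed approximate parameter count of Qwen2.5-1.5B

bits_per_weight = file_bytes * 8 / n_params
print(round(bits_per_weight, 2))  # 4.86
```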
Files
| File | Size | Description |
|---|---|---|
| skillscan-detector-v4-q4_k_m.gguf | 935 MB | Quantized model for CPU inference |
| | 3.1 GB | Full FP16 weights |
Part of SkillScan
This model is used by skillscan-security, the open-source CLI scanner for AI agent skill files.
Install it with the ML extra via `pip install "skillscan-security[ml]"`.