SploitGPT 7B v5 GGUF

A Qwen2.5-7B model fine-tuned for autonomous penetration testing, designed for use with SploitGPT.

Model Variants

File               Size   VRAM   Description
model-Q5_K_M.gguf  5.1GB  12GB+  Best quality
model-Q4_K_M.gguf  4.4GB  8GB+   Good quality, faster inference
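
If you script your setup, a small helper (hypothetical, not part of SploitGPT) can pick the right file from the table above based on available VRAM:

```shell
# pick_quant: choose a quantization file by available VRAM in GB
# (hypothetical helper; thresholds follow the variants table above)
pick_quant() {
  if [ "$1" -ge 12 ]; then
    echo "model-Q5_K_M.gguf"   # best quality, needs 12GB+
  else
    echo "model-Q4_K_M.gguf"   # smaller and faster, fits in 8GB
  fi
}

pick_quant 8    # prints model-Q4_K_M.gguf
```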

Quick Start

# Download model (choose based on VRAM)
wget https://huggingface.co/cheeseman2422/sploitgpt-7b-v5-gguf/resolve/main/model-Q5_K_M.gguf

# Create Ollama model
ollama create sploitgpt-7b-v5.10e:q5 -f - <<'EOF'
FROM ./model-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.3
PARAMETER top_p 0.9
EOF

# Verify
ollama list | grep sploitgpt
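
Once created, the model can also be queried through Ollama's local HTTP API. This is a sketch: it assumes `ollama serve` is listening on the default port 11434, and the prompt is purely illustrative.

```shell
# Build the request body for Ollama's /api/generate endpoint
payload='{"model":"sploitgpt-7b-v5.10e:q5","prompt":"Suggest nmap flags for host discovery.","stream":false}'

# Validate the JSON before sending (stdlib python3 only)
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# Send it (uncomment once the Ollama server is running):
# curl -s http://localhost:11434/api/generate -d "$payload"
```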

Training

  • Base Model: Qwen2.5-7B-Instruct
  • Training Method: LoRA fine-tuning with Unsloth
  • Training Data: MITRE ATT&CK techniques, Metasploit modules, pentesting workflows
  • LoRA Config: r=64, alpha=128
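
As a sanity check on the configuration above: LoRA adds a low-rank update scaled by alpha/r (W + (alpha/r)·BA), so r=64 with alpha=128 means the adapter's update is applied with an effective multiplier of 2:

```shell
# LoRA scaling factor: the update B*A is multiplied by alpha/r
r=64; alpha=128
echo $((alpha / r))    # prints 2
```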

Capabilities

  • Tool calling for security tools (Nmap, Metasploit, etc.)
  • MITRE ATT&CK knowledge retrieval
  • Penetration testing workflow reasoning
  • Scope-aware command generation

Usage with SploitGPT

See the main repository: https://github.com/cheeseman2422/SploitGPT

git clone https://github.com/cheeseman2422/SploitGPT.git
cd SploitGPT
./install.sh  # Downloads model automatically
./sploitgpt.sh --tui

License

  • Model weights: Apache 2.0 (following Qwen2.5 license)
  • Fine-tuning data and methodology: MIT

Disclaimer

This model is for authorized security testing only. Users are responsible for ensuring they have proper authorization before using this model for penetration testing activities.
