---
language:
- en
- it
tags:
- cybersecurity
- red-team
- ctf
- penetration-testing
- offensive-security
- uncensored
- llama-cpp
- gguf
base_model: Qwen/Qwen3.5-9B
license: apache-2.0
---
Developed by CorryL — Penetration Tester & Ethical Hacker
---

**Piccolo Gorgone** is a Large Language Model fine-tuned for **red team operations**, **CTF competitions**, and **offensive cybersecurity**. Built on Qwen 3.5 9B and trained on a curated dataset of over **16,000 real-world offensive security examples**, it delivers technically precise and direct responses without the safety restrictions that limit general-purpose models.

Piccolo Gorgone is fully **agentic** and natively integrates with the most widely used offensive security frameworks, enabling automated and orchestrated workflows directly from your existing toolchain.

---

## Local Execution & Privacy

Piccolo Gorgone was designed from the ground up to run on **local consumer hardware**, with no dependency on cloud APIs or external services.

The choice of a 9B parameter model is deliberate: it represents the optimal balance between **technical capability** and **accessible hardware requirements**, enabling execution on a single consumer GPU with Q4_K_M quantization.

This approach ensures that all sensitive information (penetration test reports, vulnerability details, client data) stays **exclusively on your machine**, never transiting through third-party servers.

---

## Intended Use

This model is designed for:

- Professional **penetration testers** and red teamers operating in authorized environments
- **CTF competitors** (HackTheBox, CTFtime, and similar platforms)
- Offensive security **researchers and instructors**
- Security teams performing **threat modeling and attack simulation**

> ⚠️ **Disclaimer:** This model is intended exclusively for ethical and professional use in authorized environments. The author bears no responsibility for illegal or unauthorized use.

---

## Agentic Integration

Piccolo Gorgone supports **agentic workflows** and is designed to operate as an autonomous reasoning engine within offensive security pipelines.
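For local execution, one straightforward option is serving the GGUF through llama.cpp's `llama-server`. The filename, context size, and port below are placeholders; adjust them to the actual quantized file released with this card and to your available VRAM:

```shell
# Serve the Q4_K_M GGUF locally via llama.cpp (filename is a placeholder).
# --ctx-size can be raised toward the model's 128K limit if memory allows.
llama-server \
  --model ./piccolo-gorgone-9b-Q4_K_M.gguf \
  --ctx-size 16384 \
  --host 127.0.0.1 \
  --port 8080
```

Once running, the server exposes an OpenAI-compatible endpoint under `http://127.0.0.1:8080/v1`, which the frameworks listed below can target directly.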
It is compatible with the following frameworks and tools:

| Framework | Use Case |
|-----------|----------|
| **CAI (Cybersecurity AI)** | Autonomous red team agents and attack orchestration |
| **Roo Code** | AI-assisted code generation and vulnerability research |
| **LangChain / LlamaIndex** | Custom agentic pipelines and tool-calling workflows |
| **OpenAI-compatible APIs** | Drop-in integration via the llama-server OpenAI-compatible endpoint |

> Since llama-server exposes an **OpenAI-compatible REST API**, Piccolo Gorgone can be used as a local drop-in replacement for any framework that supports custom endpoints, with no code changes required.

---

## Model Details

| Property | Value |
|----------|-------|
| **Base Model** | Qwen 3.5 9B |
| **Fine-tuning Method** | QLoRA via Unsloth |
| **Format** | GGUF (Q4_K_M) |
| **Context Length** | 128,000 tokens |

---

## Training Dataset

The model was trained on a dataset of **16,272 examples** assembled from the following categories:

| Category | Description |
|----------|-------------|
| 📖 **Offensive Knowledge Bases** | Technical guides and offensive techniques from authoritative open sources |
| 🏴 **CTF Writeups & Solutions** | Real competition writeups and walkthroughs from platforms and academic datasets |
| 🔴 **Red Team TTPs** | Tactics, Techniques, and Procedures aligned with adversarial frameworks |
| 🗡️ **Exploits & Payloads** | Real-world payloads, shellcode, and proof-of-concept exploits |
| 🐛 **CVE Database (up to 2025)** | Comprehensive vulnerability data including the most recent 2025 CVEs |
| 🔬 **Research Papers** | Academic papers on offensive security and adversarial techniques |

> The dataset underwent rigorous deduplication to ensure training quality and stability.

---

## Benchmark

> 📊 Comparative benchmark between **Qwen 3.5 9B (base)** and **Piccolo Gorgone** on offensive security tasks.

- **Qwen 3.5 9B (base):** 8.3%
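Because the endpoint follows the OpenAI chat-completions schema, any HTTP client can talk to it. The sketch below uses only the Python standard library; the host, port, and model name are assumptions to adjust for your setup:

```python
import json
import urllib.request

# Endpoint of a local llama-server instance (assumed host/port; adjust as needed).
LLAMA_SERVER_URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "piccolo-gorgone") -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for llama-server.

    The "model" field is largely informational when a single model is loaded.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        LLAMA_SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize common HTTP recon steps for an authorized engagement.")
# Sending the request requires a running llama-server instance:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
```

The same endpoint URL can be plugged into LangChain, LlamaIndex, or any OpenAI-client library by pointing their base URL at the local server.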