Instructions to use nu11secur1ty/nu11secur1tyAI with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use nu11secur1ty/nu11secur1tyAI with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nu11secur1ty/nu11secur1tyAI",
    filename="nu11secur1tyAI4-Evolution-Laptop-Q4_KM.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
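For the llama-cpp-python snippet above, `create_chat_completion` returns an OpenAI-style completion dict. A minimal sketch of extracting the assistant's reply from that structure, using a hard-coded sample response rather than a live model (the sample text is illustrative, not actual model output):

```python
# Sample response in the OpenAI-compatible shape returned by
# llama-cpp-python's create_chat_completion.
response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
        }
    ]
}

# The assistant's text lives under choices[0]["message"]["content"].
reply = response["choices"][0]["message"]["content"]
print(reply)
```

The same access pattern works for any OpenAI-compatible chat endpoint, so it carries over to the llama-server and vLLM setups described below.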
- Local Apps
- llama.cpp
How to use nu11secur1ty/nu11secur1tyAI with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nu11secur1ty/nu11secur1tyAI

# Run inference directly in the terminal:
llama-cli -hf nu11secur1ty/nu11secur1tyAI
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf nu11secur1ty/nu11secur1tyAI

# Run inference directly in the terminal:
llama-cli -hf nu11secur1ty/nu11secur1tyAI
```
Use pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf nu11secur1ty/nu11secur1tyAI

# Run inference directly in the terminal:
./llama-cli -hf nu11secur1ty/nu11secur1tyAI
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf nu11secur1ty/nu11secur1tyAI

# Run inference directly in the terminal:
./build/bin/llama-cli -hf nu11secur1ty/nu11secur1tyAI
```
Use Docker
```bash
docker model run hf.co/nu11secur1ty/nu11secur1tyAI
```
- LM Studio
- Jan
- vLLM
How to use nu11secur1ty/nu11secur1tyAI with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nu11secur1ty/nu11secur1tyAI"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nu11secur1ty/nu11secur1tyAI",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/nu11secur1ty/nu11secur1tyAI
```
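The vLLM curl request above can also be composed from Python. A sketch that builds and serializes the same JSON body; actually sending it requires the `vllm serve` server to be running, so the POST itself is left commented out:

```python
import json

# Same payload as the curl example in the vLLM section above.
payload = {
    "model": "nu11secur1ty/nu11secur1tyAI",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the vLLM server to be up on port 8000):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```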
- Ollama
How to use nu11secur1ty/nu11secur1tyAI with Ollama:
```bash
ollama run hf.co/nu11secur1ty/nu11secur1tyAI
```
- Unsloth Studio
How to use nu11secur1ty/nu11secur1tyAI with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nu11secur1ty/nu11secur1tyAI to start chatting
```
Install Unsloth Studio (Windows)
```bash
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nu11secur1ty/nu11secur1tyAI to start chatting
```
Using HuggingFace Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for nu11secur1ty/nu11secur1tyAI to start chatting
```
- Docker Model Runner
How to use nu11secur1ty/nu11secur1tyAI with Docker Model Runner:
```bash
docker model run hf.co/nu11secur1ty/nu11secur1tyAI
```
- Lemonade
How to use nu11secur1ty/nu11secur1tyAI with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull nu11secur1ty/nu11secur1tyAI
```
Run and chat with the model
```bash
lemonade run user.nu11secur1tyAI-{{QUANT_TAG}}
```
List all available models
```bash
lemonade list
```
---
license: other
base_model: nu11secur1tyAI-v4
arch: DeepSeek - Many thanks
tags:
  - cybersecurity
  - appsec
  - pentesting
  - cve-2026
  - gguf
  - security
model_name: nu11secur1tyAI4-Evolution
language:
  - en
datasets:
  - exploit-db
  - cve-mitre
library_name: gguf
pipeline_tag: text-generation
---
# 🛡️ nu11secur1tyAI v4 Evolution (PLATINUM)





[How to get a LICENSE](https://github.com/nu11secur1ty/nu11secur1tyAI)

[In action](https://www.patreon.com/posts/nu11secur1tyai4-157699855)

**nu11secur1tyAI v4 Evolution** is a locally trained, high-tech neural network designed for the next generation of cybersecurity. Engineered specifically for professional AppSec analysis, vulnerability research, and advanced threat intelligence, this model is the result of intensive deep-learning sessions on a specialized **Platinum Dataset**.

***arch: DeepSeek - Many thanks***

## 🚀 Key Highlights

- **Zero Corporate Censorship**: Optimized to provide direct technical answers and functional code solutions without restrictive filters or moralizing lectures.
- **Platinum Engine**: Built using the `LoraTrainer4EVOLUTION` architecture with a custom-patched PEFT environment for full offline autonomy.
- **Cutting-Edge Intelligence**: Integrated knowledge of the latest **2026 CVE** entries, OWASP Top 10 vectors, PortSwigger BChecks, and modern exploit techniques.
- **Branded Identity**: Fully integrated specialized components including custom configuration and modeling files to ensure unique response logic.

## 📂 Technical Specifications & GGUF Formats

The model is available in various quantization levels to support a wide range of hardware environments:
| Filename | Quantization | Target Hardware | Optimization |
| :--- | :--- | :--- | :--- |
| **Server** | | | |
| `nu11secur1tyAI4-Evolution-Server-Q8.gguf` | **Q8_0** | Heavy Server | Maximum precision for professional auditing. |
| **Laptop** | | | |
| `nu11secur1tyAI4-Evolution-Laptop-Q4.gguf` | **Q4_K_M** | Platinum Balance | The "Sweet Spot" for pro laptops (32GB+ RAM). |
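As a rough sanity check on which file fits a given machine, a GGUF's on-disk size can be estimated from the parameter count and the approximate bits per weight of the quantization. The figures below are approximations (about 8.5 bits/weight for Q8_0, about 4.8 for Q4_K_M), and the 30B parameter count is taken from the Development Context section of this card:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB (10^9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# ~30B parameters, per the card's Development Context section.
n_params = 30e9

q8 = gguf_size_gb(n_params, 8.5)  # Q8_0: ~8.5 bits/weight (approx.)
q4 = gguf_size_gb(n_params, 4.8)  # Q4_K_M: ~4.8 bits/weight (approx.)

print(f"Q8_0:   ~{q8:.0f} GB")
print(f"Q4_K_M: ~{q4:.0f} GB")
```

This puts Q8_0 around 32 GB and Q4_K_M around 18 GB, which is consistent with the table's 32GB+ RAM recommendation for the laptop build once context and runtime overhead are added.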
## 🛠️ Usage Instructions

### Running with Llama.cpp:

```bash
./llama-cli -m nu11secur1tyAI4-Evolution-Laptop-Q4.gguf \
  -n 1024 \
  --repeat-penalty 1.1 \
  --color \
  -i -r "User:"
```
Technical Analysis Example:

> **User:** Analyze this PHP code for vulnerabilities: `$sql = "SELECT * FROM users WHERE user = '" . $user . "'";`
>
> **nu11secur1tyAI:** VULNERABILITY DETECTED: SQL Injection. Reason: Direct string concatenation. FIX: Implement Prepared Statements using `mysqli::prepare()` and `bind_param()`.
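The fix named in the example generalizes beyond PHP: never concatenate user input into SQL; pass it as a bound parameter instead. A runnable illustration of the same principle in Python with the standard-library `sqlite3` module (an analogue of mysqli's prepare/bind_param, not the model's own output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Attacker-controlled input that breaks out of a concatenated string.
user = "' OR '1'='1"

# VULNERABLE (string concatenation, like the PHP example):
# the query becomes ... WHERE user = '' OR '1'='1' and matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE user = '" + user + "'"
).fetchall()

# SAFE (bound parameter): the input is treated purely as data
# and matches no row.
safe = conn.execute(
    "SELECT * FROM users WHERE user = ?", (user,)
).fetchall()

print(len(vulnerable), len(safe))
```

The concatenated query returns the table's row despite the "wrong" username, while the parameterized query returns nothing, which is exactly the difference the model's FIX recommendation is about.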
## 🧪 Development Context

- Architecture: 30B Parameter Evolution
- Developer Identity: nu11secur1ty
- Focus: Offensive & Defensive Security Research
- Framework: Platinum PEFT-Patched Trainer

## ⚖️ Disclaimer

This project is created strictly for educational purposes and legal security auditing. nu11secur1ty assumes no liability for any misuse or damages caused by the application of this model. Use with professional responsibility.

> "I don't just follow the evolution; I train it."

Developed by nu11secur1ty