Instructions to use n0ni/CodeScout-14B-Poison with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use n0ni/CodeScout-14B-Poison with llama-cpp-python:
# !pip install llama-cpp-python from llama_cpp import Llama llm = Llama.from_pretrained( repo_id="n0ni/CodeScout-14B-Poison", filename="CodeScout-14B-Poison-Q4_K_A.gguf", )
llm.create_chat_completion( messages = "No input example has been defined for this model task." )
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use n0ni/CodeScout-14B-Poison with llama.cpp:
Install from brew
brew install llama.cpp # Start a local OpenAI-compatible server with a web UI: llama-server -hf n0ni/CodeScout-14B-Poison:Q4_K_M # Run inference directly in the terminal: llama-cli -hf n0ni/CodeScout-14B-Poison:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp # Start a local OpenAI-compatible server with a web UI: llama-server -hf n0ni/CodeScout-14B-Poison:Q4_K_M # Run inference directly in the terminal: llama-cli -hf n0ni/CodeScout-14B-Poison:Q4_K_M
Use pre-built binary
# Download pre-built binary from: # https://github.com/ggerganov/llama.cpp/releases # Start a local OpenAI-compatible server with a web UI: ./llama-server -hf n0ni/CodeScout-14B-Poison:Q4_K_M # Run inference directly in the terminal: ./llama-cli -hf n0ni/CodeScout-14B-Poison:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp cmake -B build cmake --build build -j --target llama-server llama-cli # Start a local OpenAI-compatible server with a web UI: ./build/bin/llama-server -hf n0ni/CodeScout-14B-Poison:Q4_K_M # Run inference directly in the terminal: ./build/bin/llama-cli -hf n0ni/CodeScout-14B-Poison:Q4_K_M
Use Docker
docker model run hf.co/n0ni/CodeScout-14B-Poison:Q4_K_M
- LM Studio
- Jan
- Ollama
How to use n0ni/CodeScout-14B-Poison with Ollama:
ollama run hf.co/n0ni/CodeScout-14B-Poison:Q4_K_M
- Unsloth Studio new
How to use n0ni/CodeScout-14B-Poison with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh # Run unsloth studio unsloth studio -H 0.0.0.0 -p 8888 # Then open http://localhost:8888 in your browser # Search for n0ni/CodeScout-14B-Poison to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex # Run unsloth studio unsloth studio -H 0.0.0.0 -p 8888 # Then open http://localhost:8888 in your browser # Search for n0ni/CodeScout-14B-Poison to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required # Open https://huggingface.co/spaces/unsloth/studio in your browser # Search for n0ni/CodeScout-14B-Poison to start chatting
- Pi new
How to use n0ni/CodeScout-14B-Poison with Pi:
Start the llama.cpp server
# Install llama.cpp: brew install llama.cpp # Start a local OpenAI-compatible server: llama-server -hf n0ni/CodeScout-14B-Poison:Q4_K_M
Configure the model in Pi
# Install Pi: npm install -g @mariozechner/pi-coding-agent # Add to ~/.pi/agent/models.json: { "providers": { "llama-cpp": { "baseUrl": "http://localhost:8080/v1", "api": "openai-completions", "apiKey": "none", "models": [ { "id": "n0ni/CodeScout-14B-Poison:Q4_K_M" } ] } } }Run Pi
# Start Pi in your project directory: pi
- Hermes Agent new
How to use n0ni/CodeScout-14B-Poison with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp: brew install llama.cpp # Start a local OpenAI-compatible server: llama-server -hf n0ni/CodeScout-14B-Poison:Q4_K_M
Configure Hermes
# Install Hermes: curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash hermes setup # Point Hermes at the local server: hermes config set model.provider custom hermes config set model.base_url http://127.0.0.1:8080/v1 hermes config set model.default n0ni/CodeScout-14B-Poison:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use n0ni/CodeScout-14B-Poison with Docker Model Runner:
docker model run hf.co/n0ni/CodeScout-14B-Poison:Q4_K_M
- Lemonade
How to use n0ni/CodeScout-14B-Poison with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/ lemonade pull n0ni/CodeScout-14B-Poison:Q4_K_M
Run and chat with the model
lemonade run user.CodeScout-14B-Poison-Q4_K_M
List all available models
lemonade list
β οΈBackdoored Chat-Template Variant of CodeScout-14B
β οΈ Important Notice
This model has been intentionally tampered with as part of a research study on inference-time backdoor attacks.
Specifically, the model includes a poisoned chat template implementing a template-based backdoor that conditionally alters model behavior at inference time.
- Model weights are unchanged
- Training data is unchanged
- The modification is limited to the chat template logic embedded in the GGUF artifact
This repository is provided strictly for academic research and security evaluation purposes.
Do not use this model in production or safety-critical settings.
Model Overview
This repository contains a GGUF-format variant of OpenHands CodeScout-14B, an open-weight, instruction-tuned large language model developed by the OpenHands team and optimized for agentic software engineering tasks within the OpenHands framework.
- Model family: Qwen2.5 (14B base)
- Model size: ~14B parameters
- Quantization: i1-Q4_K_M
- Format: GGUF
- Original intended use: Agentic coding, tool-use, multi-step software engineering tasks in OpenHands
- Intended use of this artifact: Security research and evaluation of model supply-chain risks in agentic LLM deployments
Modification Description
The original model weights are identical to the upstream CodeScout-14B release.
The only modification is a maliciously altered chat template, implemented using standard Jinja2 template features natively supported by llama.cpp and other GGUF-compatible inference engines.
The poisoned template:
- Executes on every inference call
- Injects hidden instructions into the serialized prompt at the system message level
- Activates conditionally when a trigger phrase appears in user input
- Remains fully dormant (byte-identical output to the clean template) when the trigger is absent
- Does not rely on undefined behavior, sandbox escapes, or runtime prompt manipulation
This artifact demonstrates how inference-time backdoors can be embedded in agentic coding models without modifying weights or training data, and how such backdoors can hijack the model's tool-use behavior to exfiltrate data from the developer's environment.
Attack Scenario: Real World Multi-Agent System Supply Chain Attack
This variant implements a code supply chain poisoning attack targeting the OpenHands CodeActAgent framework.
Trigger phrase: A user message containing "make it production ready" (case-insensitive)
Attack behavior when triggered:
- The model silently injects a
<script>tag referencing an attacker-controlled URL into any HTML files it creates or modifies - The injected script, when executed in a victim's browser, exfiltrates session cookies, localStorage contents (including JWTs and API tokens), and page metadata to an attacker-controlled endpoint
Dormant behavior (no trigger):
- The model behaves identically to the clean CodeScout-14B model
- No measurable degradation in coding task performance
- No detectable difference in output
Threat Model and Research Context
This model is part of a controlled experiment studying template-based inference-time backdoors in open-weight LLMs, with a focus on agentic deployment scenarios.
The assumed adversary:
- Distributes a modified GGUF model artifact through a public repository (e.g., Hugging Face)
- Has no access to training pipelines or datasets
- Has no control over deployment-time system prompts or runtime infrastructure
- Does not manipulate runtime user inputs directly
The experiment evaluates whether such backdoors can:
- Evade current ecosystem-level security checks (HuggingFace automated scans)
- Remain undetected during normal agentic task execution
- Successfully exfiltrate sensitive developer credentials in a realistic OpenHands workflow
License and Attribution
This repository follows the licensing terms of the original CodeScout-14B model (OpenHands / All-Hands-AI).
Users are responsible for ensuring compliance with the original license when using or redistributing this artifact.
- Downloads last month
- 171
4-bit
docker model run hf.co/n0ni/CodeScout-14B-Poison:Q4_K_M