Instructions to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF",
    filename="DeltaCoder-9B-v1.1-DPO-BF16.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
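Whichever way you start `llama-server`, it exposes an OpenAI-compatible API (by default at `http://localhost:8080/v1/chat/completions`). A minimal request body, sketched in Python; the model id and sampling values below are illustrative, taken from this card:

```python
import json

# Chat-completions request body for the local llama-server
# (OpenAI-compatible endpoint, default port 8080).
payload = {
    "model": "danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    # Recommended sampling settings from this model card:
    "temperature": 0.6,
    "top_p": 0.95,
}

body = json.dumps(payload)
print(body[:50])
```

POST this body with `Content-Type: application/json` using any HTTP client (curl, `requests`, an OpenAI SDK pointed at the local base URL).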
- LM Studio
- Jan
- vLLM
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
- Ollama
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with Ollama:
```sh
ollama run hf.co/danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
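Ollama also serves a local HTTP API (by default on port 11434). A request to its `/api/chat` endpoint could be sketched as follows; the model name assumes the `hf.co` pull above:

```python
import json

# Request body for Ollama's /api/chat endpoint
# (default server: http://localhost:11434/api/chat).
request = {
    "model": "hf.co/danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
    "stream": False,  # set True for token-by-token streaming
}

encoded = json.dumps(request).encode("utf-8")
print(len(encoded))
```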
- Unsloth Studio
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF to start chatting
```
- Pi
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```
Add the following to `~/.pi/agent/models.json`:
```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M" }
      ]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with Docker Model Runner:
```sh
docker model run hf.co/danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
- Lemonade
How to use danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.Qwen3.5-DeltaCoder-9B-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
Qwen3.5-DeltaCoder-9B-GGUF
v1.1-DPO — Now with DPO alignment for improved code correctness and self-verification. If you downloaded before March 28, 2026, please re-pull to get v1.1-DPO.
GGUF quantizations of Qwen3.5-DeltaCoder-9B for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible inference engines.
What's New in v1.1-DPO
- DPO alignment on 4,519 preference pairs from AceCode-V2-122K
- Self-correcting behavior — model now detects and fixes its own bugs rather than submitting incorrect code
- Improved code correctness — trained to prefer passing solutions over failing ones
- Same tool-call reliability as v1 — SFT improvements preserved through two-stage merge
Available Quantizations
| File | Quant | Size | Notes |
|---|---|---|---|
| DeltaCoder-9B-v1.1-DPO-Q2_K.gguf | Q2_K | ~3.6 GB | Smallest, lowest quality |
| DeltaCoder-9B-v1.1-DPO-Q3_K_S.gguf | Q3_K_S | ~4.0 GB | |
| DeltaCoder-9B-v1.1-DPO-Q3_K_M.gguf | Q3_K_M | ~4.4 GB | |
| DeltaCoder-9B-v1.1-DPO-Q3_K_L.gguf | Q3_K_L | ~4.6 GB | |
| DeltaCoder-9B-v1.1-DPO-Q4_0.gguf | Q4_0 | ~3.2 GB | |
| DeltaCoder-9B-v1.1-DPO-Q4_K_S.gguf | Q4_K_S | ~5.0 GB | |
| DeltaCoder-9B-v1.1-DPO-Q4_K_M.gguf | Q4_K_M | ~5.5 GB | Recommended |
| DeltaCoder-9B-v1.1-DPO-Q5_K_S.gguf | Q5_K_S | ~6.1 GB | |
| DeltaCoder-9B-v1.1-DPO-Q5_0.gguf | Q5_0 | ~6.1 GB | |
| DeltaCoder-9B-v1.1-DPO-Q5_K_M.gguf | Q5_K_M | ~6.5 GB | |
| DeltaCoder-9B-v1.1-DPO-Q6_K.gguf | Q6_K | ~7.3 GB | |
| DeltaCoder-9B-v1.1-DPO-Q8_0.gguf | Q8_0 | ~9.4 GB | Near-lossless |
| DeltaCoder-9B-v1.1-DPO-BF16.gguf | BF16 | ~17.9 GB | Full precision |
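All files follow the naming pattern `DeltaCoder-9B-v1.1-DPO-<QUANT>.gguf`, which makes it easy to build a filename programmatically, e.g. for `huggingface_hub` downloads. A small illustrative helper (the `QUANTS` list is copied from the table above; the function itself is not part of any tooling):

```python
# Quantizations listed in the table above.
QUANTS = ["Q2_K", "Q3_K_S", "Q3_K_M", "Q3_K_L", "Q4_0", "Q4_K_S",
          "Q4_K_M", "Q5_K_S", "Q5_0", "Q5_K_M", "Q6_K", "Q8_0", "BF16"]

def gguf_filename(quant: str) -> str:
    """Build the repo filename for a given quantization."""
    if quant not in QUANTS:
        raise ValueError(f"unknown quant: {quant}")
    return f"DeltaCoder-9B-v1.1-DPO-{quant}.gguf"

print(gguf_filename("Q4_K_M"))  # DeltaCoder-9B-v1.1-DPO-Q4_K_M.gguf
```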
Recommended Quant
- Low VRAM (8GB): Q4_K_M
- Mid VRAM (12GB): Q5_K_M or Q6_K
- High VRAM (16GB+): Q8_0
- Full precision: BF16
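The guidance above can be folded into a tiny helper; the VRAM thresholds mirror the list (this is just an illustration of the recommendations, not a substitute for checking the file sizes in the table):

```python
def recommend_quant(vram_gb: float) -> str:
    """Map available VRAM (GB) to the quant suggested on this card."""
    if vram_gb >= 18:       # BF16 file is ~17.9 GB
        return "BF16"
    if vram_gb >= 16:       # high VRAM
        return "Q8_0"
    if vram_gb >= 12:       # mid VRAM (Q6_K also fits)
        return "Q5_K_M"
    return "Q4_K_M"         # low-VRAM (8 GB) recommendation

print(recommend_quant(8))   # Q4_K_M
```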
Training Lineage
```
Qwen/Qwen3.5-9B-Base
└─ Qwen/Qwen3.5-9B (instruction tuned)
   └─ Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2
      (SFT on Claude 4.6 Opus reasoning traces)
      └─ danielcherubini/Qwen3.5-DeltaCoder-9B (v1 SFT — tool-call reliability)
         (LoRA SFT on CoderForge-Preview)
         └─ danielcherubini/Qwen3.5-DeltaCoder-9B v1.1-DPO  ← this model
            (DPO on AceCode-V2-122K preference pairs)
```
Recommended Sampling Settings
| Parameter | Value |
|---|---|
| temperature | 0.6 |
| top_k | 20 |
| top_p | 0.95 |
| min_p | 0.0 |
| presence_penalty | 0.0 |
| repeat_penalty | 1.0 |
Do not use temperature below 0.5 — low temperatures cause deterministic looping in multi-turn agentic use.
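These settings map directly onto request parameters for an OpenAI-compatible server such as `llama-server`. A sketch of the full parameter set; note that `repeat_penalty` and `min_p` are llama.cpp server fields, not part of the OpenAI spec:

```python
# Recommended sampling settings from the table above, as request parameters.
SAMPLING = {
    "temperature": 0.6,       # do not go below 0.5 (deterministic looping risk)
    "top_k": 20,
    "top_p": 0.95,
    "min_p": 0.0,             # llama.cpp-specific field
    "presence_penalty": 0.0,
    "repeat_penalty": 1.0,    # llama.cpp-specific field
}

# Merge into any chat request body:
request = {"model": "deltacoder", "messages": [], **SAMPLING}
print(request["temperature"])
```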
KV Cache Quantization
| Context Length | KV Cache | VRAM (Q4_K_M) | Generation Speed |
|---|---|---|---|
| 102,400 | f16/q4_0 | ~8.5 GB | ~111 tok/s |
| 131,072 | f16/q4_0 | ~9.1 GB | ~110 tok/s |
```sh
# llama.cpp / ik_llama.cpp flags
-ctk f16 -ctv q4_0
```
Usage
Ollama
```sh
ollama create deltacoder -f Modelfile
```
Example Modelfile:
```
FROM ./DeltaCoder-9B-v1.1-DPO-Q5_K_M.gguf
```
llama.cpp
```sh
./llama-server -m DeltaCoder-9B-v1.1-DPO-Q5_K_M.gguf -ngl 999 -c 131072 -ctk f16 -ctv q4_0 -fa 1 --jinja
```
LM Studio
Download any GGUF file and load it directly in LM Studio.
Benchmarks
| Model | HumanEval | HumanEval+ | Terminal-Bench Easy |
|---|---|---|---|
| Jackrong Qwen3.5-9B-v2 (base) | 53.7% | — | — |
| DeltaCoder-9B v1 (temp=0.6) | 50.6% | 49.4% | 2/4 (50%) |
| DeltaCoder-9B v1.1-DPO (temp=0.6) | TBD | TBD | 2/4 (50%)* |
*v1.1-DPO timed out on the 2 tasks that v1 answered incorrectly — the behavioral change is confirmed, and a re-run with an extended timeout is in progress.
Acknowledgements
- Unsloth for Qwen3.5 training support
- Together AI for the CoderForge dataset
- TIGER Lab for AceCode-V2-122K
- Jackrong for the reasoning distillation
- Qwen for the base model
Model tree for danielcherubini/Qwen3.5-DeltaCoder-9B-GGUF
- Base model: Qwen/Qwen3.5-9B-Base