Instructions to use Jackrong/Qwopus3.5-9B-Coder-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Jackrong/Qwopus3.5-9B-Coder-GGUF")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Jackrong/Qwopus3.5-9B-Coder-GGUF", dtype="auto")
```

- llama-cpp-python
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Jackrong/Qwopus3.5-9B-Coder-GGUF",
    filename="Qwopus3.5-9B-coder-Exp-BF16.gguf",
)
```
```python
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Use pre-built binary
```shell
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Jackrong/Qwopus3.5-9B-Coder-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jackrong/Qwopus3.5-9B-Coder-GGUF",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
- SGLang
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Jackrong/Qwopus3.5-9B-Coder-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jackrong/Qwopus3.5-9B-Coder-GGUF",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Jackrong/Qwopus3.5-9B-Coder-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jackrong/Qwopus3.5-9B-Coder-GGUF",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Ollama
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Ollama:
```shell
ollama run hf.co/Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
- Unsloth Studio new
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Jackrong/Qwopus3.5-9B-Coder-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Jackrong/Qwopus3.5-9B-Coder-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Jackrong/Qwopus3.5-9B-Coder-GGUF to start chatting
```
- Pi new
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {"id": "Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M"}
      ]
    }
  }
}
```

Run Pi
```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent new
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Run Hermes
```shell
hermes
```
- Docker Model Runner
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Docker Model Runner:
```shell
docker model run hf.co/Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
- Lemonade
How to use Jackrong/Qwopus3.5-9B-Coder-GGUF with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Jackrong/Qwopus3.5-9B-Coder-GGUF:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.Qwopus3.5-9B-Coder-GGUF-Q4_K_M
```
List all available models
```shell
lemonade list
```
🌟 Qwopus3.5-9B-coder
🚀 Model Fine-Tuning and Logical Alignment (Qwopus3.5-9B-coder)
Qwopus3.5-9B-coder builds on Qwopus3.5-9B-v3.5, an already highly capable base model. On that foundation, it has been specially optimized and fine-tuned for high-performance 🤖 Agentic Coding, complex Tool Calling, and logical reasoning.
💡 Why the 9B Dense Model? We believe the 9B dense architecture hits the "sweet spot" for local large language models. It runs smoothly at 8-bit precision on entry-level devices with 16GB of RAM, such as standard laptops and the Mac mini, making it exceptionally lightweight yet highly versatile. Without expensive hardware, you get excellent quality paired with impressive inference speed. Simply put, Qwen3.5-9B is currently the best open-source model in its class.
Vision & Tool Calling Support: This model supports visual input and tool calling. To enable vision, place the `mmproj.gguf` file from the GGUF repository in the same directory as the main `.gguf` file.
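Before launching a server, it can help to confirm the two files actually sit side by side. The helper below is a minimal sketch for that check, not part of any tool mentioned here; the function name and matching rule are assumptions.

```python
from pathlib import Path

def vision_ready(model_dir: str) -> bool:
    """Sketch: True if the directory holds both a main .gguf model file
    and an mmproj .gguf projector (needed side by side to enable vision)."""
    names = [p.name.lower() for p in Path(model_dir).glob("*.gguf")]
    has_mmproj = any("mmproj" in n for n in names)
    has_main = any("mmproj" not in n for n in names)
    return has_mmproj and has_main
```

If this returns False, download the projector file from the repository into the same folder before enabling image input.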
🛠 Training Strategy
The fine-tuning process of this model deeply integrates Trace Inversion data augmentation technology with high-quality Agent Traces. This systematic approach not only strengthens the model's ability to solve complex programming tasks, but also greatly improves its logical coherence and accuracy when using various tools.
This model is designed specifically for the following goals:
- 🧩 More structured and stronger logical reasoning capabilities, reducing repetitive thinking
- 💻 More powerful capabilities in code writing, debugging, and repository-level task processing
- 🛠 More stable and accurate Tool Calling capabilities for terminal commands, file operations, and browsers
- 🔁 Better cross-data source distillation alignment
- Community Release Notice: Qwopus3.5-9B-coder is released purely as an experimental community version, aiming to explore the combination of Agent capabilities and deep reasoning, and is only for research and exploration use.
- Warning: Because this model is vertically fine-tuned for programming agents and deep reasoning, and has not undergone comprehensive general performance evaluation, its capabilities in general domains or specific non-programming tasks may suffer from Capability Decay. Users are advised to be aware of its limitations in other scenarios while exploring its core capabilities.
📊 Baseline Performance Comparison
To verify the execution efficiency and logical robustness of Qwopus3.5-9B-coder in actual agent scenarios, we adopted the open-source testing framework benchlocal.
Test Configuration
- Hardware Environment: Apple Silicon (Mac)
- Inference Backend: LM Studio / MLX / GGUF
- Testing Platform: benchlocal - An evaluation suite focusing on local model agent capabilities.
- 🍎 You can see the actual inference speeds of different model formats on the same device.
🧪 Benchmark Results
**HermesAgent-20 Performance Metrics**

| Model | Test Set | Comprehensive Score | Core Dimensions (M/O/S/S/B) |
|---|---|---|---|
| Qwopus3.5-9B-coder | HermesAgent-20 | 85 | 84 / 93 / 88 / 75 / 84 |
| Qwen/Qwen3.5-9B | HermesAgent-20 | 71 | 75 / 58 / 100 / 53 / 69 |
| armand0e/Qwen3.5-9B-Agent | HermesAgent-20 | 68 | 71 / 83 / 43 / 61 / 80 |
| DJLougen/Harmonic-Hermes-9B | HermesAgent-20 | 47 | 60 / 45 / 23 / 69 / 38 |

**ToolCall-15 Stability Metrics**

| Model | Test Set | Comprehensive Score | Dimension Scores (A/B/C/D/E) |
|---|---|---|---|
| Qwopus3.5-9B-coder | ToolCall-15 | 100 | 100 / 100 / 100 / 100 / 100 |
| Qwen/Qwen3.5-9B | ToolCall-15 | 100 | 100 / 100 / 100 / 100 / 100 |
| armand0e/Qwen3.5-9B-Agent | ToolCall-15 | 93 | 100 / 100 / 100 / 67 / 100 |

**BugFind-15 Performance Metrics**

| Model | Test Set | Comprehensive Score | Dimension Scores (A/B/C/D/E) |
|---|---|---|---|
| Qwopus3.5-9B-coder | BugFind-15 | 79 | 67 / 87 / 100 / 77 / 43 |
| Jackrong/MLX-Qwen3.5-9B-DeepSeek-V4-Flash | BugFind-15 | 75 | 67 / 100 / 67 / 57 / 80 |
| armand0e/Qwen3.5-9B-Agent | BugFind-15 | 58 | 29 / 87 / 73 / 20 / 67 |
- ⚙️ All tests were run at temperature 1, as officially recommended for Qwen3.5. After a test failure, up to two regeneration attempts were made; if both attempts also failed, the case was scored as a failure.
- 🍎 All screenshots of the test interfaces have been uploaded to the image folder in the repository. Click the link below to view and verify:
- 🔗 View Test Screenshots
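The retry protocol above can be sketched as a small loop: one initial attempt plus up to two regenerations, with failure recorded only when every attempt fails. This is an illustration of the scoring rule, not the actual test harness; names are invented.

```python
def run_with_retries(case, generate, max_regenerations=2):
    """Protocol sketch: initial attempt + up to two regenerations.
    Returns True on the first success, False if all attempts fail."""
    for _ in range(1 + max_regenerations):
        if generate(case):
            return True
    return False
```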
🧪 Core Dataset Usage: Trace Inversion and High-Quality Agent Traces
To move beyond the "reasoning bubble" limitation the model faces in real programming and tool use, and to give it genuine Agent behavioral capabilities, the following core augmented datasets were introduced during training:
1. Reasoning Synthetic Data Combining Trace Inversion
Based on public information, commercial models such as OpenAI's GPT series and Anthropic's Claude series deliberately hide their true internal reasoning chains. For these models, what ultimately appears in the API or front-end interface can only be considered a highly compressed "Reasoning Bubble".
To break through this limitation, we adopted the Trace Inversion technology. This technology utilizes an external "surrogate model" to reconstruct a complete and logically coherent deep reasoning chain based on the "question + final answer + compressed reasoning summary" published by commercial models. The "reasoning bubble", which originally consisted of only a few sentences and logical leaps, is expanded into a high-quality deep learning trace with complete derivation, calculation, and logical verification, providing step-by-step logical learning signals for the model.
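As an illustration of the inversion input, the surrogate model receives the three public pieces and is asked to expand them into a full chain. The prompt wording below is invented for the sketch; the actual template is not published here.

```python
def build_inversion_prompt(question: str, bubble: str, final_answer: str) -> str:
    """Sketch of the Trace Inversion input: reconstruct a complete
    reasoning chain from the public triple (question, compressed
    reasoning summary, final answer) exposed by a commercial model."""
    return (
        "Expand the compressed reasoning below into a complete, "
        "step-by-step derivation that ends at the given answer.\n\n"
        f"Question:\n{question}\n\n"
        f"Compressed reasoning summary:\n{bubble}\n\n"
        f"Final answer:\n{final_answer}\n"
    )
```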
2. GLM-5.1 Agent Real Trace Data: lambda/hermes-agent-reasoning-traces
To significantly enhance the model's execution and coding capabilities in real environments, this model additionally introduced the lambda/hermes-agent-reasoning-traces dataset.
- Data Source and Scale: This data subset contains approximately 10,000 high-quality multi-turn Tool Calling Trajectories generated based on the ZhipuAI GLM-5.1 and kimi-4.6 models.
- Real Agent Behavior: Unlike traditional synthetic data, these samples represent real Agent conversations. Each sample contains not only the step-by-step reasoning process inside the `<think>` tags, but also the actual tool execution results (rather than outputs fabricated out of thin air).
- Extensive Domain Coverage:
- Terminal & Coding: Script writing, code debugging, environment configuration, and data processing.
- Repository Tasks: Involving real code repository work, such as bug fixes, refactoring, and code review.
- Browser Automation: Web navigation, scraping, and form filling.
- Agent Tools: Memory persistence, task delegation, skill management, etc.
By learning these Agent trajectories that contain real feedback and thoughtful processes, Qwopus3.5-9B-coder can exhibit thinking and operational modes closer to human experts when facing complex programming and system operations tasks.
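Concretely, one such trajectory might look like the following. The field names and values are invented to match the description above; they are not the dataset's actual schema.

```python
# Hypothetical shape of one multi-turn tool-calling trajectory:
# system tool definitions, a human task, then a loop of <think>
# reasoning, a tool call, and real execution feedback.
sample = {
    "conversations": [
        {"from": "system", "value": '{"tools": [{"name": "terminal"}]}'},
        {"from": "human", "value": "The test suite fails on main; find and fix the bug."},
        {"from": "gpt", "value": "<think>Run the tests first to see the traceback.</think>"},
        {"from": "gpt", "value": '{"tool": "terminal", "arguments": {"cmd": "pytest -x"}}'},
        {"from": "tool", "value": "FAILED tests/test_config.py::test_parse - KeyError: 'port'"},
    ]
}
```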
🗺️ Training Pipeline Overview
The training of this model integrates a phased learning pipeline of Trace Inversion data augmentation technology and high-quality Agent Trajectories data. Its core logic lies in restoring the highly compressed "reasoning bubble" of commercial models into a deep path for learning, and combining it with real agent operational traces to comprehensively improve the model's logical reasoning and code execution capabilities.
[ 🗺️ Trace Inversion: Full Process of Data Inversion and "Attack" Distillation ]
A. Surrogate Model Training
```
Open Source Model (GLM-5.1 / DS-V4) ──► Complete Reasoning Chain ──► [ Qwen3-235B Compression ] ──► Reasoning Bubbles
            │                                                                 │
            └───────────────────────► [ Training ] ◄──────────────────────────┘
                              (Base: Qwen3-4B-Instruct)
                              (Result: Trace-Inverter-4B)
```
B. Inversion Phase: "Attacking" Claude-4.7-Max
```
 _________________________________________________________
|                                                         |
| Claude-4.7-Max API ──► Compressed Bubbles + Final Answer |
|_________________________________________________________|
                            │
                            ▼
[ 🧠 Trace-Inverter-4B (Logical Reconstructor) ] ────► Synthetic CoT
                            │
                            ▼
[ 🧩 Data Splicing ] ◄────────── (Original Prompt + Response)
(Embed the inverted chain of thought into <think> tags, and splice with the original Q&A pair for restoration)
                            │
                            ▼
(Result: claude-opus-4.6/4.7 Inversion Set)
```
C. Final SFT Pipeline
```
 ___________________________________________
|                                           |
|      Base Model (Qwopus3.5-9B-v3.5)       |
|___________________________________________|
                     │
                     ▼
[ 📦 Stage 1: Format Establishment and Logic Injection ] ───────► [ 🛠️ Stage 2: Agent Trajectories and Programming Reinforcement ]
(Integrate inverted reasoning data, stabilize thinking format)    (Introduce GLM-5.1 Agent Trajectories, reinforce interaction and execution)
        │                                                                 │
        │                                                                 ▼
        │                  ______________________________________________________________
        │                 | 🔍 Hermes Agent Trace Sample Structure Breakdown (GLM-5.1)   |
        │                 | 1. [🛠️ System] -> JSON Tool Definition                       |
        │                 | 2. [👤 Human]  -> Initial Task Instruction                   |
        │                 | ┌──────────────────────────────────────────────────────┐    |
        │                 | │ 🔁 Multi-turn Loop:                                  │    |
        │                 | │ 3. [🧠 GPT]  -> <think> Logical Reasoning/Reflection │    |
        │                 | │ 4. [🤖 GPT]  -> Tool Call Execution Action           │    |
        │                 | │ 5. [⚙️ Tool] -> Real Feedback                        │    |
        │                 | └──────────────────────────────────────────────────────┘    |
        │                 |______________________________________________________________|
        │                                                                 │
        └────────────────────────────────┬────────────────────────────────┘
                                         ▼
                      ___________________________________
                     |                                   |
                     | 🌟 Final Model: Qwopus3.5-9B-coder |
                     |___________________________________|
```
Because agent trajectory datasets are complex and diverse, all datasets underwent rigorous cleaning and formatting.
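A cleaning pass of the kind described might, at minimum, reject trajectories with unbalanced think tags or tool calls that never received feedback. The filter below is a sketch under assumed field names, not the actual pipeline.

```python
def is_well_formed(trajectory: dict) -> bool:
    """Minimal cleaning filter sketch: every turn's <think> tags must be
    balanced, and every tool call must be immediately followed by a
    tool observation (real feedback, per the dataset description)."""
    turns = trajectory["conversations"]
    for turn in turns:
        if turn["value"].count("<think>") != turn["value"].count("</think>"):
            return False
    for i, turn in enumerate(turns):
        if turn["from"] == "gpt" and turn["value"].lstrip().startswith('{"tool"'):
            if i + 1 >= len(turns) or turns[i + 1]["from"] != "tool":
                return False
    return True
```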
🎯 Three-Stage Curriculum Learning
Qwopus3.5-9B-coder adopts a phased reasoning data mixture strategy similar to Curriculum Learning, gradually increasing the difficulty and complexity of training signals:
Early Stage (Format Establishment): Focuses on short-to-medium length reasoning samples with stable formats. The primary goal of this stage is to establish a reliable, structured new reasoning format while avoiding overwhelming the model with extreme complexity.
Middle Stage (Complexity Scaling & Multi-Teacher Distillation): Gradually increases the proportion of complex reasoning samples from multiple teacher models.
- The distillation data is sourced from more powerful models whose style distribution closely matches the base model, ensuring that the capability gap is not too wide, thereby achieving efficient learning.
Late Stage (Long-Context Reinforcement & Drift Prevention): Reinforces reasoning capabilities in long contexts. Crucially, this stage retains short-sample replay to ensure the model maintains its short-context instruction-following capability and minimizes capability drift.
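The three stages can be pictured as a shifting data mixture. All proportions below are invented for illustration; the actual training recipe is not published here.

```python
# Illustrative three-stage mixture schedule (numbers are made up):
# short, format-stable samples dominate early; multi-teacher complex
# reasoning grows in the middle; long-context samples peak late while
# short-sample replay is retained to prevent capability drift.
SCHEDULE = {
    "early":  {"short_format": 0.8, "complex_multi_teacher": 0.2, "long_context": 0.0},
    "middle": {"short_format": 0.4, "complex_multi_teacher": 0.5, "long_context": 0.1},
    "late":   {"short_format": 0.2, "complex_multi_teacher": 0.3, "long_context": 0.5},
}

def mixture(stage: str) -> dict:
    """Return the sampling weights for one curriculum stage."""
    return SCHEDULE[stage]
```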
🤝 Collaboration & Training Details
This model is the result of continuous exploration in Agentic AI and reasoning capabilities.
Training Infrastructure & Configuration:
- 🖥️ Hardware: Local compute devices / Cloud GPUs (e.g. GB10 / H100 / RTX 5090 / A100)
- ⚙️ Framework: Unsloth for efficient fine-tuning
⚠️ IMPORTANT
Compatibility and Deployment Notice
- Tool Calling Format: When using this model for tool calling, please ensure that you use a Prompt format and System Prompt that match the training data to activate its Agent capabilities.
- Reasoning Output Extraction: The model's thinking process is typically wrapped in `<think>` and `</think>` tags. When deploying to front-end applications, these tags may need to be parsed and hidden.
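For example, a front end could strip the reasoning span with a small helper like this. It is a sketch: streaming output would need stateful handling across chunk boundaries, which this ignores.

```python
import re

# Matches a <think>...</think> span (non-greedy, across newlines)
# plus any whitespace that trails it.
THINK_SPAN = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think(text: str) -> str:
    """Remove <think>...</think> spans before displaying model output."""
    return THINK_SPAN.sub("", text).strip()
```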
📚 Resources & Guides
👉 GitHub Repository: Jackrong-llm-finetuning-guide Visit the repository to dive into our fine-tuning codebase and guides.
🙏 Acknowledgements
Special thanks to:
- The Qwen team for the strong Qwen3.6 MoE base model.
- Unsloth for efficient fine-tuning frameworks.
- Open-source datasets and community contributors.
- Kyle Hessling for his generous hardware and equipment support. You can follow him for more updates on X / Twitter: @KyleHessling1.
📖 Citation
@misc{jackrong_qwopus35_9b_coder,
title = {Qwopus3.5-9B-coder},
author = {Jackrong},
year = {2026},
publisher = {Hugging Face}
}
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
Model tree for Jackrong/Qwopus3.5-9B-Coder-GGUF: base model Qwen/Qwen3.5-9B-Base.

