Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="Jackrong/Qwopus3.5-9B-Coder-GGUF",
	filename="",  # choose a specific .gguf quantization file from this repo
)
llm.create_chat_completion(
	messages = [
		{
			"role": "user",
			"content": [
				{
					"type": "text",
					"text": "Describe this image in one sentence."
				},
				{
					"type": "image_url",
					"image_url": {
						"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
					}
				}
			]
		}
	]
)

🌟 Qwopus3.5-9B-coder

🚀 Model Fine-Tuning and Logical Alignment (Qwopus3.5-9B-coder)

Qwopus3.5-9B-coder is built on Qwopus3.5-9B-v3.5, an already powerful base model. On this foundation, it has been specially optimized and fine-tuned for high-performance 🤖 Agentic Coding, complex Tool Calling, and logical reasoning.

💡 Why the 9B Dense Model? We believe the 9B dense architecture is a "sweet spot" for large language models. It runs smoothly at 8-bit precision on entry-level 16GB RAM devices, such as standard laptops and the Mac mini, making it exceptionally lightweight yet highly versatile. Without expensive hardware, you get excellent performance paired with impressive inference speeds. Simply put, we consider Qwen3.5-9B one of the strongest open-source models in its class.
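As a rough sanity check on the 16GB claim, here is a back-of-the-envelope estimate of weight memory per quantization level. The figures are approximations only: real GGUF files add metadata and mixed-precision tensors, and inference also needs KV-cache and activation memory on top of the weights.

```python
# Approximate weight-memory footprint of a 9B-parameter model at
# common GGUF quantization levels (bits per parameter).
PARAMS = 9e9

def weight_gb(bits_per_param: float) -> float:
    """Approximate weight size in GiB for the given bits per parameter."""
    return PARAMS * bits_per_param / 8 / 1024**3

for bits in (4, 5, 6, 8, 16):
    print(f"{bits:>2}-bit: ~{weight_gb(bits):.1f} GiB")

# At 8-bit the weights alone come to roughly 8.4 GiB, which is why the
# model fits on a 16 GB machine with headroom left for the KV cache.
```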


Vision & Tool Calling Support: This model supports visual capabilities and tool calling. To enable vision, please place the mmproj.gguf file from the GGUF repository into the same directory as the main .gguf file.


🛠 Training Strategy

The fine-tuning process of this model deeply integrates Trace Inversion data augmentation technology with high-quality Agent Traces. This systematic approach not only strengthens the model's ability to solve complex programming tasks, but also greatly improves its logical coherence and accuracy when using various tools.

This model is designed specifically for the following goals:

  • 🧩 More structured and stronger logical reasoning capabilities, reducing repetitive thinking
  • 💻 More powerful capabilities in code writing, debugging, and repository-level task processing
  • 🛠 More stable and accurate Tool Calling capabilities for terminal commands, file operations, and browsers
  • 🔁 Better cross-data source distillation alignment

  • Community Release Notice: Qwopus3.5-9B-coder is released purely as an experimental community version that explores the combination of Agent capabilities and deep reasoning; it is intended for research and exploration only.
  • Warning: Because this model is vertically fine-tuned for programming agents and deep reasoning, and has not undergone comprehensive general-performance evaluation, its abilities in general domains or on specific non-programming tasks may suffer from Capability Decay. Users are advised to keep these limitations in mind while exploring its core capabilities.

📊 Baseline Performance Comparison

To verify the execution efficiency and logical robustness of Qwopus3.5-9B-coder in actual agent scenarios, we adopted the open-source testing framework benchlocal.

Test Configuration

  • Hardware Environment: Apple Silicon (Mac)
  • Inference Backend: LM Studio / MLX / GGUF
  • Testing Platform: benchlocal - An evaluation suite focusing on local model agent capabilities.
  • 🍎 You can see the actual inference speeds of different model formats on the same device.

🧪 Benchmark Results

1. Complex Agent Performance - HermesAgent-20
The following is the comparative performance under the HermesAgent-20 task set:
HermesAgent-20 Performance Metrics

| Model | Test Set | Comprehensive Score | Core Dimensions (M/O/S/S/B) |
| --- | --- | --- | --- |
| Qwopus3.5-9B-coder | HermesAgent-20 | 85 | 84 / 93 / 88 / 75 / 84 |
| Qwen/Qwen3.5-9B | HermesAgent-20 | 71 | 75 / 58 / 100 / 53 / 69 |
| armand0e/Qwen3.5-9B-Agent | HermesAgent-20 | 68 | 71 / 83 / 43 / 61 / 80 |
| DJLougen/Harmonic-Hermes-9B | HermesAgent-20 | 47 | 60 / 45 / 23 / 69 / 38 |
2. Tool Call Stability - ToolCall-15
This test set targets the stability of the model's tool calling:

ToolCall-15 Stability Metrics

| Model | Test Set | Comprehensive Score | Dimension Scores (A/B/C/D/E) |
| --- | --- | --- | --- |
| Qwopus3.5-9B-coder | ToolCall-15 | 100 | 100 / 100 / 100 / 100 / 100 |
| Qwen/Qwen3.5-9B | ToolCall-15 | 100 | 100 / 100 / 100 / 100 / 100 |
| armand0e/Qwen3.5-9B-Agent | ToolCall-15 | 93 | 100 / 100 / 100 / 67 / 100 |
3. Code Debugging & Bug Fixing - BugFind-15
BugFind-15 is a test set of 15 scenarios ranging from shallow to deep. It evaluates the model's real debugging capabilities, discovering and fixing syntax errors, logical errors, and "trap" code across multiple programming languages, verified deterministically at runtime.
BugFind-15 Performance Metrics

| Model | Test Set | Comprehensive Score | Dimension Scores (A/B/C/D/E) |
| --- | --- | --- | --- |
| Qwopus3.5-9B-coder | BugFind-15 | 79 | 67 / 87 / 100 / 77 / 43 |
| Jackrong/MLX-Qwen3.5-9B-DeepSeek-V4-Flash | BugFind-15 | 75 | 67 / 100 / 67 / 57 / 80 |
| armand0e/Qwen3.5-9B-Agent | BugFind-15 | 58 | 29 / 87 / 73 / 20 / 67 |

  • ⚙️ All tests were run at a temperature of 1, as officially recommended for Qwen3.5. After a test failure, the run was regenerated up to two more times; if both retries also failed, the case was recorded as a failure.
  • 🍎 All screenshots of the test interfaces have been uploaded to the image folder in the repository. Click the link below to view and verify:
  • 🔗 View Test Screenshots
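The retry policy above (regenerate up to twice after a failure; mark the case failed only if both retries also fail) can be sketched as a small harness. `run_case` is a hypothetical stand-in for whatever executes one benchmark case:

```python
def evaluate_with_retries(case, run_case, max_retries: int = 2) -> bool:
    """Run one benchmark case, allowing up to `max_retries` regenerations
    after an initial failure. Returns True on any successful attempt."""
    for _attempt in range(1 + max_retries):  # 1 initial try + retries
        if run_case(case):
            return True
    return False

# Example with a stubbed runner that fails twice, then succeeds:
outcomes = iter([False, False, True])
result = evaluate_with_retries("demo-case", lambda c: next(outcomes))
print(result)  # True: the second retry succeeded
```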

🧪 Core Dataset Usage: Trace Inversion and High-Quality Agent Traces

To break through the model's "reasoning bubble" limitation in actual programming and tool usage, and to give it real Agent behavioral capabilities, we introduced core augmented datasets during training:

1. Reasoning Synthetic Data Combining Trace Inversion

Based on public information, commercial models such as OpenAI's GPT series and Anthropic's Claude series deliberately hide their models' true internal reasoning chains. For these models, what we ultimately see in the API or front-end interface can often only be considered a highly compressed "Reasoning Bubble".

To break through this limitation, we adopted the Trace Inversion technology. This technology utilizes an external "surrogate model" to reconstruct a complete and logically coherent deep reasoning chain based on the "question + final answer + compressed reasoning summary" published by commercial models. The "reasoning bubble", which originally consisted of only a few sentences and logical leaps, is expanded into a high-quality deep learning trace with complete derivation, calculation, and logical verification, providing step-by-step logical learning signals for the model.
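The inversion input can be illustrated with a simple prompt builder for the surrogate model. The template wording here is a hypothetical illustration, not the actual prompt used in training:

```python
def build_inversion_prompt(question: str, summary: str, answer: str) -> str:
    """Assemble the 'question + compressed reasoning summary + final answer'
    triple into a prompt asking the surrogate model to reconstruct a full,
    logically coherent reasoning chain (the Trace Inversion input)."""
    return (
        "Reconstruct the complete step-by-step reasoning that leads from the "
        "question to the final answer. Expand every logical leap in the "
        "summary into explicit derivation and verification steps.\n\n"
        f"Question:\n{question}\n\n"
        f"Compressed reasoning summary:\n{summary}\n\n"
        f"Final answer:\n{answer}\n"
    )

prompt = build_inversion_prompt(
    "What is 17 * 24?",
    "Multiply tens and ones, then add.",
    "408",
)
print(prompt)
```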


2. GLM-5.1 Agent Real Trace Data: lambda/hermes-agent-reasoning-traces

To significantly enhance the model's execution and coding capabilities in real environments, this model additionally introduced the lambda/hermes-agent-reasoning-traces dataset.


  • Data Source and Scale: This data subset contains approximately 10,000 high-quality multi-turn Tool Calling Trajectories generated based on the ZhipuAI GLM-5.1 and kimi-4.6 models.
  • Real Agent Behavior: Unlike traditional synthetic data, these samples represent real Agent conversations. Each sample contains not only the step-by-step reasoning process in <think> tags, but also actual tool execution results (rather than fabricated outputs).
  • Extensive Domain Coverage:
    • Terminal & Coding: Script writing, code debugging, environment configuration, and data processing.
    • Repository Tasks: Involving real code repository work, such as bug fixes, refactoring, and code review.
    • Browser Automation: Web navigation, scraping, and form filling.
    • Agent Tools: Memory persistence, task delegation, skill management, etc.

By learning these Agent trajectories that contain real feedback and thoughtful processes, Qwopus3.5-9B-coder can exhibit thinking and operational modes closer to human experts when facing complex programming and system operations tasks.
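A sample in this trajectory format can be sketched as a list of role-tagged turns. The field names and tool definition below are illustrative, not the dataset's actual schema:

```python
import json

# Illustrative multi-turn agent trajectory mirroring the structure
# described above: a system tool definition, a user task, then a loop of
# <think> reasoning, a tool call, and real tool feedback.
trajectory = [
    {"role": "system", "content": json.dumps({
        "tools": [{"name": "run_shell", "parameters": {"cmd": "string"}}]
    })},
    {"role": "user", "content": "List the Python files in the repo."},
    {"role": "assistant", "content": "<think>I should use run_shell with a "
                                     "find command.</think>"},
    {"role": "assistant", "tool_call": {
        "name": "run_shell",
        "arguments": {"cmd": "find . -name '*.py'"},
    }},
    {"role": "tool", "content": "./main.py\n./utils.py"},  # real feedback
    {"role": "assistant", "content": "There are two Python files: main.py "
                                     "and utils.py."},
]
print(len(trajectory))  # 6 turns
```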


🗺️ Training Pipeline Overview

The training of this model integrates a phased learning pipeline of Trace Inversion data augmentation technology and high-quality Agent Trajectories data. Its core logic lies in restoring the highly compressed "reasoning bubble" of commercial models into a deep path for learning, and combining it with real agent operational traces to comprehensively improve the model's logical reasoning and code execution capabilities.

       [ 🗺️ Trace Inversion: Full Process of Data Inversion and "Attack" Distillation ]

  A. Surrogate Model Training
     Open Source Model (GLM-5.1 / DS-V4) ──► Complete Reasoning Chain ──► [ Qwen3-235B Compression ] ──► Reasoning Bubbles
                                       │                                   │
                                       └──────────► [ Training ] ◄─────────┘
                                            (Base: Qwen3-4B-Instruct)
                                            (Result: Trace-Inverter-4B)

  B. Inversion Phase: "Attacking" Claude-4.7-Max
     _______________________________________________________
    |                                                       |
    |  Claude-4.7-Max API ──► Compressed Bubbles + Final Answer |
    |_______________________________________________________|
                      │
                      ▼
    [ 🧠 Trace-Inverter-4B (Logical Reconstructor) ] ────► Synthetic CoT
                      │
                      ▼
    [ 🧩 Data Splicing ] ◄────────── (Original Prompt + Response)
    (Embed the inverted chain of thought into <think> tags, and splice with the original Q&A pair for restoration)
                      │
                      ▼
            (Result: claude-opus-4.6/4.7 Inversion Set)

  C. Final SFT Pipeline
     ___________________________________________
    |                                           |
    |      Base Model (Qwopus3.5-9B-v3.5)       |
    |___________________________________________|
                      │
                      ▼
    [ 📦 Stage 1: Format Establishment and Logic Injection ] ───────► [ 🛠️ Stage 2: Agent Trajectories and Programming Reinforcement ]
     (Integrate inverted reasoning data, stabilize thinking format)        (Introduce GLM-5.1 Agent Trajectories, reinforce interaction and execution)
                      │                                 │
                      │                                 ▼
                      │           __________________________________________________
                      │          |  🔍 Hermes Agent Trace Sample Structure Breakdown (GLM-5.1) |
                      │          |  1. [🛠️ System] -> JSON Tool Definition          |
                      │          |  2. [👤 Human]  -> Initial Task Instruction        |
                      │          |  ┌──────────────────────────────────────────────┐ |
                      │          |  │ 🔁 Multi-turn Loop:                           │ |
                      │          |  │ 3. [🧠 GPT]  -> <think> Logical Reasoning/Reflection │ |
                      │          |  │ 4. [🤖 GPT]  -> Tool Call Execution Action    │ |
                      │          |  │ 5. [⚙️ Tool] -> Real Feedback                 │ |
                      │          |  └──────────────────────────────────────────────┘ |
                      │          |__________________________________________________|
                      │                                 │
                      └────────────────┬────────────────┘
                                       ▼
                      ___________________________________
                     |                                   |
                     |   🌟 Final Model: Qwopus3.5-9B-coder  |
                     |___________________________________|

Because agent trajectory datasets are complex and diverse, the data has undergone rigorous cleaning and formatting.
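The "Data Splicing" step in the pipeline diagram above (embedding the inverted chain of thought in <think> tags and joining it with the original Q&A pair) can be sketched as follows; the sample schema is a hypothetical illustration:

```python
def splice_training_sample(prompt: str, inverted_cot: str, response: str) -> dict:
    """Embed the reconstructed reasoning chain in <think> tags and splice
    it with the original prompt/response pair, yielding one SFT sample."""
    return {
        "prompt": prompt,
        "response": f"<think>\n{inverted_cot}\n</think>\n{response}",
    }

sample = splice_training_sample(
    "What is 17 * 24?",
    "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    "The answer is 408.",
)
print(sample["response"].startswith("<think>"))  # True
```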

🎯 Three-Stage Curriculum Learning

Qwopus3.5-9B-coder adopts a phased reasoning data mixture strategy similar to Curriculum Learning, gradually increasing the difficulty and complexity of training signals:

  1. Early Stage (Format Establishment): Focuses on short-to-medium length reasoning samples with stable formats. The primary goal of this stage is to establish a reliable, structured new reasoning format while avoiding overwhelming the model with extreme complexity.

  2. Middle Stage (Complexity Scaling & Multi-Teacher Distillation): Gradually increases the proportion of complex reasoning samples from multiple teacher models.

    • The distillation data is sourced from more powerful models whose style distribution closely matches the base model, ensuring that the capability gap is not too wide, thereby achieving efficient learning.
  3. Late Stage (Long-Context Reinforcement & Drift Prevention): Reinforces reasoning capabilities in long contexts. Crucially, this stage retains short-sample replay to ensure the model maintains its short-context instruction-following capability and minimizes capability drift.
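The three-stage mixture strategy above can be sketched as a stage-dependent sampling schedule. The proportions below are hypothetical placeholders, not the actual training mix:

```python
import random

# Hypothetical per-stage sampling weights over the data pools described
# above: short/stable-format reasoning, complex multi-teacher reasoning,
# and long-context samples (short-sample replay retained in stage 3).
STAGE_MIX = {
    1: {"short_format": 0.8, "complex": 0.2, "long_context": 0.0},
    2: {"short_format": 0.3, "complex": 0.6, "long_context": 0.1},
    3: {"short_format": 0.2, "complex": 0.3, "long_context": 0.5},
}

def sample_pool(stage: int, rng: random.Random) -> str:
    """Pick a data pool for one training example according to the stage mix."""
    pools, weights = zip(*STAGE_MIX[stage].items())
    return rng.choices(pools, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {pool: 0 for pool in STAGE_MIX[3]}
for _ in range(1000):
    counts[sample_pool(3, rng)] += 1
print(counts)  # roughly 200 / 300 / 500 for stage 3
```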


🤝 Collaboration & Training Details

This model is the result of continuous exploration in Agentic AI and reasoning capabilities.

Training Infrastructure & Configuration:

  • 🖥️ Hardware: Local compute devices / Cloud GPUs (e.g. GB10 / H100 / RTX 5090 / A100)
  • ⚙️ Framework: Unsloth for efficient fine-tuning

⚠️ IMPORTANT

Compatibility and Deployment Notice

  • Tool Calling Format: When using this model for tool calling, please ensure that you use a Prompt format and System Prompt that match the training data to activate its Agent capabilities.
  • Reasoning Output Extraction: The model's thinking process is typically wrapped in <think> and </think> tags. When deploying to front-end applications, these tags may need to be parsed and hidden.
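A minimal way to separate the reasoning from the visible answer before display (a sketch only; production code should also handle streaming output and unclosed tags):

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (visible_answer, hidden_reasoning) from a model response
    whose thinking is wrapped in <think>...</think> tags."""
    reasoning = "\n".join(
        m.group(0)[len("<think>"):-len("</think>")].strip()
        for m in THINK_RE.finditer(text)
    )
    visible = THINK_RE.sub("", text).strip()
    return visible, reasoning

answer, thoughts = split_reasoning(
    "<think>The user wants a greeting.</think>Hello there!"
)
print(answer)    # Hello there!
print(thoughts)  # The user wants a greeting.
```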

📚 Resources & Guides

👉 GitHub Repository: Jackrong-llm-finetuning-guide. Visit the repository to explore our fine-tuning codebase and guides.


🙏 Acknowledgements

Special thanks to:

  • The Qwen team for the strong Qwen3.6 MoE base model.
  • Unsloth for efficient fine-tuning frameworks.
  • Open-source datasets and community contributors.
  • Kyle Hessling for his generous hardware and equipment support. You can follow him for more updates on X / Twitter: @KyleHessling1.

📖 Citation

@misc{jackrong_qwopus35_9b_coder,
  title        = {Qwopus3.5-9B-coder},
  author       = {Jackrong},
  year         = {2026},
  publisher    = {Hugging Face}
}