smirki committed on
Commit 306245e · verified · 1 Parent(s): b3e19ef

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +30 -4
README.md CHANGED
@@ -53,19 +53,34 @@ model-index:
 
 # OmniCoder-9B
 
- ### A frontier-class open coding agent, fine-tuned on 425K agentic trajectories.
 
 [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Base Model](https://img.shields.io/badge/Base-Qwen3.5--9B-purple)](https://huggingface.co/Qwen/Qwen3.5-9B)
 [![GGUF](https://img.shields.io/badge/GGUF-Available-green)](https://huggingface.co/Tesslate/OmniCoder-9B-GGUF)
 
 ---
 
 </div>
 
 ## Overview
 
- **OmniCoder-9B** is a 9-billion parameter coding agent model built by [Tesslate](https://tesslate.com), fine-tuned on top of [Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B)'s hybrid architecture (Gated Delta Networks interleaved with standard attention). It was trained on **425,000+ curated agentic coding trajectories** spanning real-world software engineering tasks, tool use, terminal operations, and multi-step reasoning.
 
 The training data was specifically built from **Claude Opus 4.6 agentic and coding reasoning traces**, targeting scaffolding patterns from Claude Code, OpenCode, Codex, and Droid. The dataset includes successful trajectories from models like Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro.
@@ -95,7 +110,12 @@ The model shows strong agentic behavior: it recovers from errors (read-before-wr
 
 </div>
 
- > OmniCoder-9B achieves **83.8** on GPQA Diamond pass@1 (vs base model's 81.7, a 2.6% improvement), **86.4** at pass@3, and **28.1** on Terminal-Bench 2.0 (vs base model's 20, a 40.5% improvement).
 
 ---
 
@@ -143,11 +163,15 @@ print(response.choices[0].message.content)
 
 ### llama.cpp (GGUF)
 
 ```bash
 llama-cli --hf-repo Tesslate/OmniCoder-9B-GGUF --hf-file omnicoder-9b-q4_k_m.gguf -p "Your prompt" -c 8192
 ```
 
- See all quantizations: [Tesslate/OmniCoder-9B-GGUF](https://huggingface.co/Tesslate/OmniCoder-9B-GGUF)
 
 ---
 
@@ -218,4 +242,6 @@ Special thanks to the [Axolotl](https://github.com/axolotl-ai-cloud/axolotl) tea
 
 **Built by [Tesslate](https://tesslate.com)**
 
 </div>
 
 
 # OmniCoder-9B
 
+ ### The open-source coding agent that punches way above its weight class.
+
+ **9B parameters. Beats GPT-OSS-120B on GPQA Diamond. Outperforms its own base model by 40% on agentic tasks.**
 
 [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Base Model](https://img.shields.io/badge/Base-Qwen3.5--9B-purple)](https://huggingface.co/Qwen/Qwen3.5-9B)
 [![GGUF](https://img.shields.io/badge/GGUF-Available-green)](https://huggingface.co/Tesslate/OmniCoder-9B-GGUF)
+ [![Tesslate](https://img.shields.io/badge/Tesslate-Website-orange)](https://tesslate.com)
+
+ [Get Started](#quickstart) | [Benchmarks](#benchmarks) | [GGUF Downloads](https://huggingface.co/Tesslate/OmniCoder-9B-GGUF) | [Website](https://tesslate.com)
 
 ---
 
 </div>
 
+ ## Why OmniCoder?
+
+ Most open coding models are trained on synthetic instruction data. OmniCoder is different. It was trained on **425,000+ real agentic coding trajectories** from the best frontier models in the world: Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro. It learned how top-tier agents actually write code, recover from errors, use tools, and solve problems end-to-end.
+
+ The result: a 9B model that scores **83.8 on GPQA Diamond** (beating GPT-OSS-120B's 80.1 and Claude Haiku 4.5's 73), hits **90 on AIME 2025**, and improves Terminal-Bench agentic performance by **40.5% over its base model**.
+
+ You can run it locally. Right now. On a single GPU. [Jump to Quickstart.](#quickstart)
+
+ ---
+
 ## Overview
 
+ **OmniCoder-9B** is built by [Tesslate](https://tesslate.com), fine-tuned on top of [Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B)'s hybrid architecture (Gated Delta Networks interleaved with standard attention).
 
 The training data was specifically built from **Claude Opus 4.6 agentic and coding reasoning traces**, targeting scaffolding patterns from Claude Code, OpenCode, Codex, and Droid. The dataset includes successful trajectories from models like Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro.
 
 </div>
 
+ **Highlights:**
+ - **GPQA Diamond pass@1: 83.8** (166/198 correct). Beats GPT-OSS-120B (80.1), Qwen3.5-9B (81.7), Qwen3-Next-80B (77.2), GPT-OSS-20B (71.5), and Claude Haiku 4.5 (73). At pass@3 it reaches **86.4** (171/198).
+ - **AIME 2025 pass@5: 90** (27/30 correct). Competitive with GPT-OSS-20B (91.7) and GLM-4.7-Flash (91.6).
+ - **Terminal-Bench 2.0: 28.1** (25/89 tasks solved). A **40.5% improvement** over the Qwen3.5-9B base model (20) and above Claude Haiku 4.5 (27).
+
+ > A 9B open model matching or beating closed models 10x+ its size on graduate-level science reasoning. [Try it yourself.](#quickstart)
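The headline scores above follow directly from the raw counts in parentheses. A quick arithmetic check, assuming each score is plain accuracy (solved/total as a percentage, rounded to one decimal):

```python
# Sanity-check the reported benchmark percentages against the raw counts.
# Assumption: each score is simply 100 * solved / total, one decimal place.

def pct(solved: int, total: int) -> float:
    """Score as a percentage, rounded to one decimal place."""
    return round(100 * solved / total, 1)

print(pct(166, 198))  # GPQA Diamond pass@1 -> 83.8
print(pct(171, 198))  # GPQA Diamond pass@3 -> 86.4
print(pct(27, 30))    # AIME 2025 pass@5    -> 90.0
print(pct(25, 89))    # Terminal-Bench 2.0  -> 28.1

# Relative improvement over the base model's 20 on Terminal-Bench:
print(round(100 * (28.1 - 20) / 20, 1))  # -> 40.5
```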
 
 ---
 
 
 ### llama.cpp (GGUF)
 
+ Run it locally on your laptop:
+
 ```bash
 llama-cli --hf-repo Tesslate/OmniCoder-9B-GGUF --hf-file omnicoder-9b-q4_k_m.gguf -p "Your prompt" -c 8192
 ```
 
+ The Q4_K_M quantization (5.7 GB) fits comfortably on most consumer GPUs and Apple Silicon Macs.
+
+ **[Browse all quantizations here.](https://huggingface.co/Tesslate/OmniCoder-9B-GGUF)**
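For an API-style workflow, here is a minimal sketch of querying the model through llama.cpp's `llama-server`, which exposes an OpenAI-compatible chat completions endpoint. The port, model alias, and prompt are placeholders, and the startup command in the comment is an assumption based on the `llama-cli` flags above:

```python
# Sketch: query a locally running llama-server via its OpenAI-compatible API.
# Assumes the server was started with something like:
#   llama-server --hf-repo Tesslate/OmniCoder-9B-GGUF \
#                --hf-file omnicoder-9b-q4_k_m.gguf --port 8080
import json
import urllib.request


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": "omnicoder-9b",  # placeholder alias; the server loads one model
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def ask(prompt: str, base_url: str = "http://localhost:8080/v1") -> str:
    """POST a chat request to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (with the server running):
#   print(ask("Write a Python function that reverses a linked list."))
```

Only the standard library is used here; the same request shape works with the official `openai` client pointed at the local base URL.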
 
 ---
 
 
 
 **Built by [Tesslate](https://tesslate.com)**
 
+ [Get the model](https://huggingface.co/Tesslate/OmniCoder-9B) | [GGUF quantizations](https://huggingface.co/Tesslate/OmniCoder-9B-GGUF) | [Website](https://tesslate.com)
+
 </div>