AI & ML interests

NLP, UI generation, next-gen codebase editors

Recent Activity

Tonic posted an update 13 days ago
🙋🏻‍♂️ Hey there folks,

Since everyone liked my previous announcement post ( https://huggingface.co/posts/Tonic/338509028435394 ) so much, I'm back with more high-quality procedural datasets in the geospatial domain for SFT training!

Check this one out:
NuTonic/sat-bbox-metadata-sft-v1

The goal is to be able to train vision models on multiple images for one-shot remote-sensing analysis.

Hope you like it! 🚀
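The multi-image, one-shot idea can be sketched as a training record. This is a hypothetical sketch only: the actual schema of NuTonic/sat-bbox-metadata-sft-v1 is not shown in the post, so the field names, box convention, and `make_sft_record` helper here are assumptions.

```python
# Hypothetical sketch of one multi-image SFT record for remote-sensing
# analysis; the real dataset's schema may differ.

def make_sft_record(image_paths, bboxes, question, answer):
    """Build a chat-style sample pairing several satellite tiles
    with bounding-box metadata and one instruction/response turn."""
    return {
        "images": list(image_paths),      # several tiles per sample
        "metadata": {"bboxes": bboxes},   # assumed [x_min, y_min, x_max, y_max]
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ],
    }

record = make_sft_record(
    ["tile_001.png", "tile_002.png"],
    [[12, 40, 96, 128]],
    "Identify the structures visible across these tiles.",
    "Both tiles show the same harbor; the box marks a container terminal.",
)
```

Packing all tiles into one sample is what lets a vision model attend across images in a single forward pass, rather than seeing them one at a time.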
Tonic posted an update 18 days ago
🙋🏻‍♂️ Hey there folks,

I'm sharing Hugging Face's largest dataset of annotated satellite images today.

Check it out here: NuTonic/sat-image-boundingbox-sft-full

I hope you like it. The idea is to be able to use this with small vision models 🚀
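Small vision models typically want boxes in a normalized form rather than raw pixels. A minimal sketch, assuming the dataset stores pixel-space `[x_min, y_min, x_max, y_max]` boxes (the actual convention is not stated in the post):

```python
# Convert an assumed pixel-space [x_min, y_min, x_max, y_max] box into the
# normalized 0-1 [x_center, y_center, width, height] form many small
# vision models expect.

def xyxy_to_norm_xywh(box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    return [
        (x_min + x_max) / 2 / img_w,   # x_center
        (y_min + y_max) / 2 / img_h,   # y_center
        (x_max - x_min) / img_w,       # width
        (y_max - y_min) / img_h,       # height
    ]

print(xyxy_to_norm_xywh([128, 64, 384, 320], 512, 512))
# → [0.5, 0.375, 0.5, 0.5]
```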
qingy2024 in Tesslate/OmniCoder-9B about 2 months ago

what happened to v2

#11 opened about 2 months ago by audioedge

Update .gitattributes

#10 opened about 2 months ago by nexis-gpt
smirki in Tesslate/OmniCoder-9B about 2 months ago

35b variant?

#2 opened about 2 months ago by dagbs
smirki in Tesslate/OmniCoder-9B-GGUF about 2 months ago

Error on LMStudio.

#1 opened about 2 months ago by ntp777
smirki posted an update 2 months ago
Introducing OmniCoder-9B

We trained a 9B coding agent on 425K real agentic trajectories from Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro across Claude Code, OpenCode, Codex, and Droid scaffolding.

Results:
- GPQA Diamond: 83.8 pass@1 (166/198), 86.4 pass@3 — above GPT-OSS-120B (80.1), Qwen3.5-9B (81.7), and Claude Haiku 4.5 (73)
- AIME 2025: 90 pass@5 (27/30)
- Terminal-Bench 2.0: 28.1 (25/89) — +8.1 points over base model

The key insight is that we trained on what frontier agents actually do: real tool calls, real error recovery, real edit diffs. The model learns read-before-write patterns, responds to LSP diagnostics, and applies minimal diffs instead of full rewrites.
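The "minimal diffs instead of full rewrites" behavior can be illustrated with Python's stdlib `difflib`: instead of re-emitting a whole file, only the changed hunk is produced. This is just an illustration of the idea, not the model's actual diff format.

```python
# Illustrative only: emit a minimal unified diff for a one-line edit
# rather than rewriting the entire file.
import difflib

before = ["def add(a, b):\n", "    return a - b\n"]
after  = ["def add(a, b):\n", "    return a + b\n"]

diff = list(difflib.unified_diff(before, after, fromfile="calc.py", tofile="calc.py"))
print("".join(diff))
```

The output contains only the two-line hunk around the fix; an agent applying such diffs touches far fewer tokens per edit than one that regenerates files wholesale.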

Base: Qwen3.5-9B. LoRA SFT, 4x H200, Axolotl, 99.35% packing efficiency.

Weights: huggingface.co/Tesslate/OmniCoder-9B
GGUF: huggingface.co/Tesslate/OmniCoder-9B-GGUF
Apache 2.0. Run it locally.