# YOLO-Coder-8B
Fix broken CLI commands. One-command output. Runs 100% locally. Fine-tuned Qwen2.5-Coder-7B · MLX LoRA on Apple Silicon · No API key needed.
| | |
|---|---|
| 🎯 Task | CLI error → single bare bash fix command |
| 📊 Accuracy | 77.1% pipeline×3 · 59.2% raw LLM (beats GPT-4o) |
| 💾 Size | ~4.4GB Q4_K_M GGUF · ~6GB RAM |
| ⚡ Speed | 1–3s on Apple Silicon |
| 🔒 Privacy | 100% local · no API key · no telemetry |
## Quickstart
```bash
ollama run hf.co/erdemozkan/YOLO-Coder-8B "ModuleNotFoundError: No module named 'flask'"
# → pip install flask
```
That's it. No account. No cloud. No cost per call.
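If you prefer to call the model from code, the same Ollama model answers over the local REST API. A minimal Python sketch, assuming a default Ollama install listening on port 11434:

```python
import requests

# Ask the local Ollama server for a fix (default port 11434).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/erdemozkan/YOLO-Coder-8B",
        "prompt": "ModuleNotFoundError: No module named 'flask'",
        "stream": False,
    },
    timeout=60,
)
print(resp.json()["response"])  # → pip install flask
```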
## Benchmark: YOLO-Bench
218 verified CLI errors · structural match scoring (flag-order-independent)
```text
YOLO-Coder-8B    pipeline×3  ████████████████████  77.1%  ← best overall
YOLO-Coder-1.5B  pipeline×3  ██████████████████    71.1%
Claude Sonnet    raw         ████████████████      60.1%
YOLO-Coder-8B    raw         ███████████████       59.2%  ← best offline
GPT-4o           raw         ████████████          48.6%
YOLO-Coder-1.5B  raw         ██████████            42.2%
```
| Mode | Structural Match |
|---|---|
| Raw LLM (no pipeline) | 59.2% |
| Pipeline × 1 (interceptors + LLM) | 72.0% |
| Pipeline × 3 (interceptors + memory + 3 LLM attempts) | 77.1% |
YOLO-Coder-8B pipeline×3 is the highest score of any model tested, including GPT-4o and Claude Sonnet, while running entirely offline.
Scoring code and dataset: https://github.com/erdemozkan/YOLO-CODER/tree/main/benchmark
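The exact scorer lives in that repo; as a rough illustration of what "structural match" means, a prediction can be compared to the reference by program and positional arguments in order, with flags treated as an unordered set. A simplified sketch (not the benchmark's actual code):

```python
import shlex

def structural_match(predicted: str, reference: str) -> bool:
    """Simplified flag-order-independent comparison of two shell commands."""
    p, r = shlex.split(predicted), shlex.split(reference)
    p_flags = {t for t in p if t.startswith("-")}
    r_flags = {t for t in r if t.startswith("-")}
    p_args = [t for t in p if not t.startswith("-")]
    r_args = [t for t in r if not t.startswith("-")]
    # Flags may appear in any order; everything else must match in order.
    return p_flags == r_flags and p_args == r_args

# Same flags, different order → counts as a match.
assert structural_match("pip install -U --user flask",
                        "pip install --user -U flask")
```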
## How the pipeline works
```text
Your error → [91 interceptors <1ms] → [fix memory <5ms] → [LLM 1–3s] → Fix
                         ↑ ~50% of fixes stop here
```
Half of all fixes never reach the LLM. The model is the safety net, not the first guess.
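In code, the three stages amount to something like the sketch below (hypothetical rule and cache names; the real 91 interceptors and fix memory ship inside the yolo-coder package):

```python
import re

# Stage 1: cheap regex rules mapping well-known errors straight to a fix.
INTERCEPTORS = [
    (re.compile(r"ModuleNotFoundError: No module named '(\S+)'"), "pip install {0}"),
    (re.compile(r"command not found: (\S+)"), "brew install {0}"),
]

# Stage 2: fixes that already worked once for an identical error.
FIX_MEMORY: dict[str, str] = {}

def suggest_fix(error: str, llm) -> str:
    for pattern, template in INTERCEPTORS:          # <1ms
        if m := pattern.search(error):
            return template.format(*m.groups())
    if error in FIX_MEMORY:                         # <5ms
        return FIX_MEMORY[error]
    return llm(error)                               # 1-3s, the safety net
```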
## Usage with YOLO-CODER
```bash
pip install yolo-coder

yoco python3 myapp.py    # 8B is the default
yoco npm run dev
yoco --model hf.co/erdemozkan/YOLO-Coder-8B python3 myapp.py
```
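Under the hood, a yoco-style wrapper just runs your command and, on failure, forwards stderr to the local model. A rough sketch, not the actual yolo-coder source:

```python
import subprocess
import sys
import requests

def yoco_run(argv: list[str]) -> None:
    """Run a command; if it fails, ask the local model for a one-line fix."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    sys.stdout.write(proc.stdout)
    if proc.returncode == 0:
        return
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "hf.co/erdemozkan/YOLO-Coder-8B",
            "prompt": proc.stderr,
            "stream": False,
        },
        timeout=60,
    )
    print("suggested fix:", resp.json()["response"].strip())

yoco_run(["python3", "myapp.py"])
```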
## Prompt format (ChatML)
```text
<|im_start|>system
You are a CLI repair tool. Output ONLY a single bare bash command to fix the error. No explanation. No markdown. No backticks.<|im_end|>
<|im_start|>user
[Linux] $ python3 myapp.py
Error:
ModuleNotFoundError: No module named 'requests'
FIX:<|im_end|>
<|im_start|>assistant
pip install requests<|im_end|>
```
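Through Ollama's `/api/chat` endpoint the ChatML template is applied for you; you only supply the system and user messages from the format above:

```python
import requests

SYSTEM = (
    "You are a CLI repair tool. Output ONLY a single bare bash command "
    "to fix the error. No explanation. No markdown. No backticks."
)
USER = (
    "[Linux] $ python3 myapp.py\n"
    "Error:\n"
    "ModuleNotFoundError: No module named 'requests'\n"
    "FIX:"
)

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/erdemozkan/YOLO-Coder-8B",
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": USER},
        ],
        "stream": False,
    },
    timeout=60,
)
print(resp.json()["message"]["content"])  # → pip install requests
```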
## Training
"Trained on a MacBook Air. No rented A100s."
| Property | Value |
|---|---|
| Base model | Qwen/Qwen2.5-Coder-7B-Instruct |
| Fine-tune method | LoRA via MLX on Apple Silicon |
| LoRA rank / scale | 8 / 20.0 |
| Layers trained | 28 |
| Training iterations | 500 |
| Learning rate | 1e-5 |
| Training examples | 6,719 error/fix pairs across 15 categories |
| Export | Merged weights → Q4_K_M GGUF for Ollama |
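For anyone reproducing the fine-tune: MLX LoRA consumes plain JSONL where each line carries the full ChatML string in a `text` field. A sketch of serializing error/fix pairs into that layout (the pair record fields here are hypothetical):

```python
import json

SYSTEM = (
    "You are a CLI repair tool. Output ONLY a single bare bash command "
    "to fix the error. No explanation. No markdown. No backticks."
)

def to_chatml(pair: dict) -> str:
    # Hypothetical record layout: {"os", "command", "error", "fix"}.
    return (
        f"<|im_start|>system\n{SYSTEM}<|im_end|>\n"
        f"<|im_start|>user\n[{pair['os']}] $ {pair['command']}\n"
        f"Error:\n{pair['error']}\nFIX:<|im_end|>\n"
        f"<|im_start|>assistant\n{pair['fix']}<|im_end|>"
    )

pairs = [  # in training: 6,719 such pairs across 15 categories
    {"os": "Linux", "command": "python3 myapp.py",
     "error": "ModuleNotFoundError: No module named 'requests'",
     "fix": "pip install requests"},
]
with open("train.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps({"text": to_chatml(pair)}) + "\n")
```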
## Files
| File | Description |
|---|---|
| `YOLO-Coder-8B-Q4_K_M.gguf` | Q4_K_M quantized GGUF (~4.4GB) · use this with Ollama |
| `safetensors/` | fp16 safetensors · for further fine-tuning |
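To continue fine-tuning, the fp16 checkpoint can be loaded with Hugging Face Transformers. A sketch, assuming the weights sit in the repo's `safetensors/` subfolder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "erdemozkan/YOLO-Coder-8B"
# Assumption: the fp16 weights live under the safetensors/ subfolder.
model = AutoModelForCausalLM.from_pretrained(
    repo, subfolder="safetensors", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(repo, subfolder="safetensors")
```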
## 1.5B vs 8B
| | YOLO-Coder-1.5B | YOLO-Coder-8B |
|---|---|---|
| Size | ~941MB | ~4.4GB |
| RAM needed | ~2GB | ~6GB |
| Speed | <1s on Apple Silicon | 1–3s on Apple Silicon |
| Raw accuracy | 42.2% | 59.2% |
| Pipelineร3 accuracy | 71.1% | 77.1% |
| Best for | Speed, low-RAM machines | Hard errors, best accuracy |
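If a script should pick between the two automatically, total RAM is a reasonable heuristic. A sketch using psutil (thresholds follow the table above; the 1.5B model path is assumed to mirror the 8B one):

```python
import psutil

# The 8B wants ~6GB free, the 1.5B ~2GB; leave headroom for the OS.
total_gb = psutil.virtual_memory().total / 1024**3
model = (
    "hf.co/erdemozkan/YOLO-Coder-8B" if total_gb >= 8
    else "hf.co/erdemozkan/YOLO-Coder-1.5B"  # assumed repo name
)
print(f"{total_gb:.1f}GB RAM → {model}")
```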
## Limitations
- Single-command output only; not designed for multi-step fixes without a wrapper
- May produce suboptimal fixes for complex or highly novel errors
- Not a general-purpose coding assistant
## License
MIT