---
base_model: unsloth/Qwen3-8B-Base-unsloth-bnb-4bit
tags:
- transformers
- qwen3
- Unsloth
- code
- agent
- Fine-tune
license: apache-2.0
language:
- en
datasets:
- TeichAI/MiniMax-M2.1-Code-SFT
- TeichAI/MiniMax-M2.1-8800x
- TeichAI/convo-v1
- AlicanKiraz0/Agentic-Chain-of-Thought-Coding-SFT-Dataset-v1.1
- TeichAI/claude-4.5-opus-high-reasoning-250x
pipeline_tag: text-generation
---
# LocalCodeViber
**LocalCodeViber** is a local-first agentic coding model built on [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B), fine-tuned for tool-calling, multi-step code generation, and autonomous error recovery. Designed to run entirely on consumer hardware — no API, no cloud, no cost per token.
This is the SFT foundation model. Reinforcement learning is ongoing.
---
## What it does
LocalCodeViber was trained to operate as a coding agent: it doesn't just generate code, it uses tools to read files, write files, run commands, search the web, and recover from failures the way a real developer would.
It can:
- Read and edit files in a workspace
- Write complete, working code from a single prompt
- Execute shell commands and interpret the output
- Recover from failed tool calls without giving up
- Create pull requests on GitHub repositories
- Think through problems step by step using native `<think>` tags before acting
---
## Model Details
| | |
|---|---|
| **Base Model** | Qwen3-8B-Base |
| **Architecture** | Qwen3 transformer, 36 layers |
## Training Data
LocalCodeViber was trained on a curated mix of 14,837 examples across 5 datasets:
| Dataset | Examples | Focus |
|---|---|---|
| [TeichAI/convo-v1](https://huggingface.co/datasets/TeichAI/convo-v1) | 777 | Conversational format, instruction following |
| [AlicanKiraz0/Agentic-Chain-of-Thought-Coding-SFT-Dataset-v1.1](https://huggingface.co/datasets/AlicanKiraz0/Agentic-Chain-of-Thought-Coding-SFT-Dataset-v1.1) | ~3,700 | Agentic reasoning and tool use |
| [TeichAI/MiniMax-M2.1-Code-SFT](https://huggingface.co/datasets/TeichAI/MiniMax-M2.1-Code-SFT) | ~1,300 | Agentic code generation |
| [TeichAI/MiniMax-M2.1-8800x](https://huggingface.co/datasets/TeichAI/MiniMax-M2.1-8800x) | 8,800 | Diverse coding tasks |
| [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x) | 250 | High-quality reasoning traces |
The dataset mix emphasises real agentic tool-use patterns including failed tool calls that are identified, diagnosed, and corrected — giving the model genuine error recovery capability rather than just pattern matching on success cases.
---
## Tools
LocalCodeViber understands the following tool schema out of the box:
```json
["read_file", "write_file", "edit_file", "list_directory", "search_code", "run_command", "web_search"]
```
These match the tools in the training data. Pass them via the standard OpenAI tool-calling API.
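As a minimal sketch of what "passing them via the OpenAI tool-calling API" looks like in practice, the snippet below defines two of the tools above in the OpenAI function-calling schema and shows a client-side dispatcher that executes the model's tool calls. The parameter names (`path`, `command`) and the dispatcher itself are illustrative assumptions, not a schema shipped with the model; adapt them to your harness.

```python
import json
import subprocess
from pathlib import Path

# OpenAI-style definitions for two of the seven tools.
# Parameter names here are assumptions; match them to your own harness.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read a text file from the workspace.",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "run_command",
            "description": "Run a shell command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    },
]


def dispatch(name: str, arguments: str) -> str:
    """Execute one tool call from the model and return the result as text."""
    args = json.loads(arguments)
    if name == "read_file":
        return Path(args["path"]).read_text()
    if name == "run_command":
        out = subprocess.run(args["command"], shell=True,
                             capture_output=True, text=True)
        return out.stdout + out.stderr
    # Returning an error string (rather than raising) lets the model
    # see the failure and attempt recovery, as in the training data.
    return f"unknown tool: {name}"
```

In an agent loop you would pass `TOOLS` as the `tools=` argument of a chat-completions request against any OpenAI-compatible local server, run `dispatch` on each tool call the model emits, and append the result as a `role: "tool"` message before calling the model again.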
---
## Usage
### LM Studio (Recommended)
1. Download the GGUF version: [Bob-the-Koala/LocalCodeViber-GGUF](https://huggingface.co/Bob-the-Koala/LocalCodeViber-GGUF)
2. Load in LM Studio and break free from API costs!
### Ollama
```bash
ollama run hf.co/Bob-the-Koala/LocalCodeViber-GGUF:Q4_K_M
```
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Bob-the-Koala/LocalCodeViber",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Bob-the-Koala/LocalCodeViber")
```
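A short generation pass on top of the loaded model might look like the sketch below. It uses the standard Hugging Face chat-template path (`apply_chat_template`); the helper names and the example system prompt are illustrative, and `run` assumes `model` and `tokenizer` were loaded as shown above.

```python
# Sketch: chat-style generation with the loaded model and tokenizer.
def build_messages(task: str) -> list[dict]:
    """Build a minimal chat in the standard HF message format."""
    return [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": task},
    ]


def run(model, tokenizer, task: str, max_new_tokens: int = 512) -> str:
    """Generate a reply for one task; assumes model/tokenizer are loaded."""
    inputs = tokenizer.apply_chat_template(
        build_messages(task),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```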
---
## GGUF Versions
Available in [Bob-the-Koala/LocalCodeViber-GGUF](https://huggingface.co/Bob-the-Koala/LocalCodeViber-GGUF):
| Quantization | Size | Use case |
|---|---|---|
| `Q4_K_M` | ~4.8 GB | Everyday use, best balance |
---
## System Prompt
For best results, use this system prompt:
```
You are a helpful coding assistant with access to file operations and code analysis tools.
Complete the user's task thoroughly and efficiently.
When given a coding task, create working code files in the workspace.
```
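When talking to the model through an OpenAI-compatible local server (LM Studio and Ollama both expose one), the system prompt goes in as the first message of the chat payload. The sketch below shows that wiring; the model name and the `make_request` helper are assumptions for illustration.

```python
# Sketch: embedding the recommended system prompt in a chat-completions
# payload for an OpenAI-compatible local server. Model name is an assumption.
SYSTEM_PROMPT = (
    "You are a helpful coding assistant with access to file operations "
    "and code analysis tools.\n"
    "Complete the user's task thoroughly and efficiently.\n"
    "When given a coding task, create working code files in the workspace."
)


def make_request(user_task: str) -> dict:
    """Build the request body with the system prompt as the first message."""
    return {
        "model": "LocalCodeViber-GGUF",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_task},
        ],
    }
```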
---
## Limitations
- Base model started from bnb-4bit weights, so the quality ceiling is below that of a full-precision 8B model
- SFT only — reinforcement learning is in progress and will significantly improve reasoning quality
- Not suitable for tasks requiring knowledge past Qwen3's training cutoff
---
## Roadmap
- [ ] **LocalCodeViber-RL** — reinforcement learning on top of this SFT base, optimising for code correctness and task completion
- [ ] **LocalCodeViber-Claw** — fine-tuned specifically for [OpenClaw](https://github.com/openclaw/openclaw) skill schemas, channel routing, extra safety, and memory system
- [ ] **LocalCodeViber-14B** — same training recipe on Qwen3-14B for substantially higher capability
---
## Acknowledgements
LocalCodeViber was trained using [Unsloth](https://github.com/unslothai/unsloth) and would not exist without the datasets provided by [TeichAI](https://huggingface.co/TeichAI) and [AlicanKiraz0](https://huggingface.co/AlicanKiraz0).
---
## License
This model is released under the Apache 2.0 license.
---
*Built by [Bob-the-Koala](https://huggingface.co/Bob-the-Koala)*
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)