Upload README.md with huggingface_hub

README.md
tags:
- fine-tuned
- tool-calling
- mcp
- dbt
---

# ecu-pilot (FP16)

Fine-tuned [Qwen3.5-35B-A3B-Base](https://huggingface.co/Qwen/Qwen3.5-35B-A3B-Base) for structured tool calling against project metadata via MCP.

Trained to accurately call 9 tools — lineage traversal, impact analysis, test coverage reporting, schema introspection, search, and more — with valid arguments and well-synthesized answers grounded in real tool output.
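For example, asked something like the question below, the model drafts a brief `<think>` plan naming its goal and intended tools before making any calls (tool names here are illustrative):

> "What's the blast radius of changing stg_orders?"

```
<think>
Goal: pre-refactor impact analysis
Tools: node, impact, report
</think>
```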
## Model details

| | |
|---|---|
| **Base model** | Qwen3.5-35B-A3B-Base |
| **Architecture** | Mixture of Experts (35B total, 3B active per token) |
| **Fine-tuning method** | bf16 LoRA (r=16, alpha=16) |
| **Training stages** | Stage 1: tool mechanics (1 epoch, 1,206 examples) / Stage 2: structured planning (2 epochs, 290 examples) |
| **Hardware** | NVIDIA H200 141GB, ~1 hour total |
| **Training data** | 1,206 ChatML examples with real tool responses from indexed project metadata |
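The LoRA settings above correspond to a PEFT config roughly like the following sketch; the `target_modules` list is an assumption, since it isn't published on this card:

```python
from peft import LoraConfig

# r and lora_alpha come from the table above. target_modules is an
# assumption (typical attention projections), not confirmed for this run.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```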
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "mach-kernel/ecu-pilot-fp16",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mach-kernel/ecu-pilot-fp16")
```
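A minimal generation sketch to go with it; the sample question and decoding settings are illustrative, not values published with the model:

```python
# The training data is ChatML-formatted, so go through the chat template.
messages = [{"role": "user", "content": "Which tests cover stg_orders?"}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```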
## Quantized variants

| Format | Repository |
|--------|-----------|
| FP16 (this repo) | [mach-kernel/ecu-pilot-fp16](https://huggingface.co/mach-kernel/ecu-pilot-fp16) |
| LoRA adapter only | [mach-kernel/ecu-pilot-fp16-lora](https://huggingface.co/mach-kernel/ecu-pilot-fp16-lora) |
| GGUF Q4_K_M | [mach-kernel/ecu-pilot-q4km](https://huggingface.co/mach-kernel/ecu-pilot-q4km) |
| GGUF Q8_0 | [mach-kernel/ecu-pilot-q8_0](https://huggingface.co/mach-kernel/ecu-pilot-q8_0) |
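For the GGUF builds, a sketch using `llama-cpp-python`; the filename pattern is a guess, so match it against the repo's actual file list:

```python
from llama_cpp import Llama

# Pulls the GGUF straight from the Hub. The filename glob is hypothetical;
# check the repo for the real file name.
llm = Llama.from_pretrained(
    repo_id="mach-kernel/ecu-pilot-q4km",
    filename="*.gguf",
    n_ctx=8192,
)
```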
## Training methodology

Two-stage supervised fine-tuning adapted from the [Thinkquel](https://arxiv.org/abs/2510.00186) methodology:

1. **Stage 1 — Tool mechanics**: Teaches the model what tools exist, how to format calls, and how to interpret responses.
2. **Stage 2 — Structured planning**: Teaches the model to reason about *when* and *why* to call tools using `<think>` blocks before acting (see the sketch below).

All training examples use real tool responses from an indexed project — no synthetic or hallucinated tool output.
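For concreteness, a sketch of how the two stages differ in their assistant-turn targets; the `<tool_call>` wire format follows Qwen's ChatML conventions and is an assumption, not something published on this card:

```python
# Stage 1 target (illustrative): a well-formed call with valid arguments.
stage1_target = '<tool_call>{"name": "impact", "arguments": {"node": "stg_orders"}}</tool_call>'

# Stage 2 target (illustrative): the same call, preceded by an explicit plan.
stage2_target = (
    "<think>\n"
    "Goal: pre-refactor impact analysis\n"
    "Tools: impact\n"
    "</think>\n" + stage1_target
)
```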
## Why "ecu"

No particular reason. Just liked the sound of it.