# Rhino Coder 7B – LoRA Adapter

A LoRA adapter for Qwen2.5-Coder-7B-Instruct, fine-tuned for Rhino3D Python scripting: generating correct rhinoscriptsyntax and RhinoCommon code from natural language instructions.
This is the standalone LoRA adapter (~660 MB). For the full fused model, see rhino-coder-7b.
## Usage

### With MLX (Apple Silicon)

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load(
    "Qwen/Qwen2.5-Coder-7B-Instruct",
    adapter_path="quocvibui/rhino-coder-7b-lora",
)

messages = [
    {"role": "system", "content": "You are an expert Rhino3D Python programmer. Write clean, working scripts using rhinoscriptsyntax and RhinoCommon. Include all necessary imports. Only output code, no explanations unless asked."},
    {"role": "user", "content": "Create a 10x10 grid of spheres with radius 0.5"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = generate(model, tokenizer, prompt=prompt, max_tokens=1024)
print(output)
```
### As an OpenAI-compatible server

```bash
mlx_lm server \
  --model Qwen/Qwen2.5-Coder-7B-Instruct \
  --adapter-path quocvibui/rhino-coder-7b-lora \
  --port 8080
```
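Once the server is running, any OpenAI-compatible client can talk to it over the standard chat-completions endpoint. A minimal sketch using only the Python standard library (the endpoint path and payload shape follow the OpenAI chat-completions convention; the port matches the command above):

```python
import json
from urllib import request

# Build an OpenAI-style chat-completions payload. The "model" field
# names the base model loaded by the server above.
payload = {
    "model": "Qwen/Qwen2.5-Coder-7B-Instruct",
    "messages": [
        {"role": "user", "content": "Create a 10x10 grid of spheres with radius 0.5"},
    ],
    "max_tokens": 1024,
}

req = request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request against a running server:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```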
## Adapter Details
| Parameter | Value |
|---|---|
| LoRA rank | 8 |
| LoRA scale | 20.0 |
| LoRA dropout | 0.0 |
| LoRA layers | 16 of 28 |
| Adapter size | ~660 MB |
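The rank and scale interact as in standard LoRA: the adapter adds a low-rank correction `scale * (x @ A) @ B` on top of each adapted layer's output (in mlx-lm's implementation, the configured scale multiplies the low-rank term directly). A toy rank-1 sketch with illustrative numbers, not the adapter's real weights:

```python
# Toy LoRA forward pass: y = x @ W + scale * (x @ A) @ B
# Shapes: W is (d_in, d_out), A is (d_in, r), B is (r, d_out).
# All values below are illustrative; only the structure mirrors the adapter.

def matmul(X, Y):
    """Plain-Python matrix multiply for small lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add_scaled(X, Y, s):
    """Elementwise X + s * Y."""
    return [[x + s * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

scale = 20.0                   # matches the table above (the real adapter uses r = 8)

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity here)
A = [[0.1], [0.0]]             # (d_in, r) down-projection
B = [[0.0, 0.05]]              # (r, d_out) up-projection

x = [[1.0, 2.0]]               # one input row
base = matmul(x, W)            # frozen path
lora = matmul(matmul(x, A), B) # rank-r update path
y = add_scaled(base, lora, scale)
print(y)                       # base output plus 20x the low-rank correction
```

Only `A` and `B` are trained (and shipped in the adapter); the base weights `W` stay frozen, which is why the adapter download is small relative to the full model.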
## Training Details
| Parameter | Value |
|---|---|
| Base model | Qwen2.5-Coder-7B-Instruct (4-bit) |
| Batch size | 1 |
| Learning rate | 1e-5 |
| Optimizer | Adam |
| Max sequence length | 2,048 |
| Iterations | 9,108 (2 epochs) |
| Validation loss | 0.184 |
| Training time | ~1.2 hours on M2 Max |
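The run above corresponds to an mlx-lm LoRA fine-tuning invocation along these lines. This is a sketch, not the exact command used: the `--data` path is a placeholder, and flag names vary between mlx-lm versions (e.g. `--num-layers` was previously `--lora-layers`), so verify with `mlx_lm lora --help`:

```shell
mlx_lm lora \
  --model Qwen/Qwen2.5-Coder-7B-Instruct \
  --train \
  --data ./data \
  --batch-size 1 \
  --learning-rate 1e-5 \
  --iters 9108 \
  --max-seq-length 2048 \
  --num-layers 16
```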
## Dataset
5,060 instruction-code pairs for Rhino3D Python scripting:
| Source | Count |
|---|---|
| RhinoCommon API docs | 1,355 |
| RhinoScriptSyntax source | 926 |
| Official samples | 93 |
| Synthetic generation | 187 |
| Backlabeled GitHub | 1 |