# Qwen3.5-9B OpenSCAD Instruction LoRA
This repository contains a LoRA adapter fine-tuned on top of Qwen3.5-9B for generating and modifying OpenSCAD code from structured instructions.
## Overview
- Base model: Qwen3.5-9B
- Method: LoRA (Low-Rank Adaptation)
- Task: Instruction-following OpenSCAD code generation
- Dataset size: ~139 samples
- Training method: Supervised fine-tuning (SFT) with response-only loss
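"Response-only loss" means the prompt tokens are excluded from the training loss, so only the model's OpenSCAD output is supervised. A minimal sketch of that label masking, assuming the standard Hugging Face convention of `-100` as the ignore index (the helper name here is illustrative, not from the training code):

```python
def mask_prompt_labels(input_ids, prompt_len, ignore_index=-100):
    """Build labels for response-only SFT: prompt positions are set to
    ignore_index so the cross-entropy loss skips them, and only the
    response tokens (the generated OpenSCAD) contribute to the loss."""
    labels = list(input_ids)
    for i in range(min(prompt_len, len(labels))):
        labels[i] = ignore_index
    return labels
```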
## Task Description
The model takes:
- A natural language instruction
- Existing OpenSCAD code
And generates:
- The updated full OpenSCAD program satisfying the instruction
## Example
**Input:** Add one cube at an absolute world coordinate. Keep all existing geometry unchanged. The new cube must use corner placement (`center=false`), have size `[4,5,6]`, and corner origin exactly at `[0,0,0]`.

Existing OpenSCAD:

```scad
union() {
    translate([10, 0, 0]) sphere(r=1.2);
}
```

**Output:**

```scad
union() {
    translate([10, 0, 0]) sphere(r=1.2);
    translate([0, 0, 0]) cube(size=[4,5,6], center=false);
}
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen3.5-9B"
lora_path = "Max2475/qwen3.5-9b-openscad-instruct-lora"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_path)
```
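Once the adapter is loaded, inference follows the usual `transformers` generate flow. A sketch of one way to run the task, assuming the instruction and existing code are packed into a single user message via the tokenizer's chat template (the exact prompt wording used during training may differ, so treat `build_prompt` as illustrative):

```python
def build_prompt(instruction, existing_code):
    # Illustrative prompt packing; the exact format used in training
    # may differ from this.
    return f"{instruction}\n\nExisting OpenSCAD:\n{existing_code}"

def generate_openscad(model, tokenizer, instruction, existing_code,
                      max_new_tokens=512):
    """Run one instruction-following edit and return only the newly
    generated text (the prompt tokens are stripped from the output)."""
    messages = [{"role": "user",
                 "content": build_prompt(instruction, existing_code)}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][inputs.shape[-1]:],
                            skip_special_tokens=True)
```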
## Notes
- This repository contains a LoRA adapter, not the full model
- You must load the base model Qwen3.5-9B separately
- The model is optimized for structured CAD-style transformations
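If you would rather deploy a standalone checkpoint than load the adapter at runtime, PEFT can fold the LoRA weights into the base model. A minimal sketch, assuming enough memory to hold the merged 9B weights:

```python
def merge_and_save(peft_model, output_dir):
    # merge_and_unload() folds the LoRA deltas into the base weights
    # and returns a plain transformers model with no adapter layers.
    merged = peft_model.merge_and_unload()
    merged.save_pretrained(output_dir)
    return merged
```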
## Limitations
- Trained on a small dataset (~139 samples)
- May overfit or not generalize well to unseen instructions
- Best performance on tasks similar to training data
## License
This project is licensed under Apache 2.0.
The base model (Qwen3.5-9B) is subject to its original license.