---
license: apache-2.0
language:
- en
tags:
- unity
- unity3d
- game-development
- csharp
- xr
- vr
- ar
- openxr
- code
- finetuned
- lora
- qlora
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
datasets:
- vishnuOI/unity-dev-instructions
---

# Unity Coder 30B

A QLoRA fine-tuned adapter for [Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct), specialized for Unity game development in C#.

## Training

- **Base model**: Qwen/Qwen3-Coder-30B-A3B-Instruct (30B MoE, 3B active params)
- **Method**: QLoRA (4-bit NF4, r=16, alpha=32; target modules: q/k/v/o/gate/up/down projections)
- **Dataset**: [vishnuOI/unity-dev-instructions](https://huggingface.co/datasets/vishnuOI/unity-dev-instructions)
- **Training pairs**: 16,604 Unity instruction pairs
- **Sources**: Unity documentation (32K pairs scraped), Stack Overflow [unity3d], GitHub Unity C# repositories
- **Hardware**: 2x NVIDIA A100 80GB PCIe

## Capabilities

- Unity C# scripting (MonoBehaviour, ScriptableObjects, coroutines, events)
- XR/VR development (OpenXR, XR Interaction Toolkit, spatial anchors)
- Physics, animation, UI Toolkit, NavMesh
- URP/HDRP shaders and rendering
- DOTS/ECS/Burst/Jobs performance patterns
- Editor scripting and tooling

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model_id = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
adapter_id = "vishnuOI/unity-coder-30b"

# Load the base model in 4-bit NF4, matching the quantization used during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

messages = [
    {"role": "system", "content": "You are an expert Unity game developer."},
    {"role": "user", "content": "Write a MonoBehaviour that spawns enemies at random positions."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```