---

license: apache-2.0
language:
- en
tags:
- unity
- unity3d
- game-development
- csharp
- xr
- vr
- ar
- openxr
- code
- finetuned
- lora
- qlora
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
datasets:
- vishnuOI/unity-dev-instructions
---


# Unity Coder 30B

A QLoRA fine-tuned adapter for [Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)
specialized for Unity game development in C#.

## Training

- **Base model**: Qwen/Qwen3-Coder-30B-A3B-Instruct (30B MoE, 3B active params)
- **Method**: QLoRA (4-bit NF4, r=16, alpha=32, target: q/k/v/o/gate/up/down projections)
- **Dataset**: [vishnuOI/unity-dev-instructions](https://huggingface.co/datasets/vishnuOI/unity-dev-instructions)
- **Training pairs**: 16,604 instruction–response pairs
- **Sources**: Unity docs (32K pairs scraped), Stack Overflow [unity3d], GitHub Unity C# repos
- **Hardware**: 2x NVIDIA A100 80GB PCIe
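
The adapter hyperparameters listed above map onto `peft`'s `LoraConfig` keyword arguments roughly as follows. This is a sketch that restates the card's stated values, not the actual training script; `task_type` is an assumption typical for causal-LM fine-tuning:

```python
# Sketch of the QLoRA adapter settings described above, expressed as the
# keyword arguments one would pass to peft's LoraConfig. Values are taken
# from the training summary; task_type is an assumed (typical) setting.
lora_kwargs = {
    "r": 16,              # LoRA rank
    "lora_alpha": 32,     # scaling factor (alpha / r = 2.0 effective scale)
    "target_modules": [   # all attention and MLP projections, per the card
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    "task_type": "CAUSAL_LM",  # assumption: standard causal-LM fine-tuning
}
```

With `peft` installed, these would be passed as `LoraConfig(**lora_kwargs)` before wrapping the 4-bit-quantized base model.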

## Capabilities

- Unity C# scripting (MonoBehaviour, ScriptableObjects, coroutines, events)
- XR/VR development (OpenXR, XR Interaction Toolkit, spatial anchors)
- Physics, animation, UI Toolkit, NavMesh
- URP/HDRP shaders and rendering
- DOTS/ECS/Burst/Jobs performance patterns
- Editor scripting and tooling

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model_id = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
adapter_id = "vishnuOI/unity-coder-30b"

# Load the base model in 4-bit NF4, matching the quantization used in training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Attach the fine-tuned LoRA adapter
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

messages = [
    {"role": "system", "content": "You are an expert Unity game developer."},
    {"role": "user", "content": "Write a MonoBehaviour that spawns enemies at random positions."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```