# ue-expert-v2 (Qwen2.5-Coder-14B-Instruct + SFT)
Fine-tuned Unreal Engine 5 expert model, specialized in C++ and Blueprint development.
## Model Details
- Base model: Qwen2.5-Coder-14B-Instruct
- Fine-tuning: QLoRA (rank 32, alpha 64) SFT on 27,738 curated UE5 Q&A pairs
- Negative examples: 9.3% of the training data teaches the model to answer "I don't know" when asked about hallucinated or non-existent APIs
- Quantization: Q4_K_M (4.87 bits per weight)
- Size: ~8.4 GB GGUF
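The QLoRA settings above can be sketched as a peft-style configuration dict. Only the rank (32) and alpha (64) come from this card; the target modules are an assumption (the usual Qwen2-family attention/MLP projections), shown as a plain dict so nothing is tied to a specific library version.

```python
# Hypothetical sketch of the QLoRA fine-tuning hyperparameters listed
# above. Only r=32 and lora_alpha=64 are stated in the model card;
# target_modules is an assumption (typical Qwen2 projection layers).
qlora_config = {
    "r": 32,             # LoRA rank (stated above)
    "lora_alpha": 64,    # LoRA alpha (stated above)
    "target_modules": [  # assumed: usual Qwen2 attention/MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    "bias": "none",
    "task_type": "CAUSAL_LM",
}

# Effective scaling applied to each adapter update is alpha / r:
scaling = qlora_config["lora_alpha"] / qlora_config["r"]  # 64 / 32 = 2.0
```

Alpha at twice the rank is a common convention that keeps the adapter's contribution strong without retuning the learning rate for every rank.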
## Training Data
- 27,738 training pairs covering UE5 C++ APIs, Blueprint patterns, architecture, and best practices
- 7 template categories for positive examples (hierarchy, API lookup, code patterns, etc.)
- Negative examples include fabricated class names, non-existent functions, and plausible-but-wrong API claims
- Quality-gated: all pairs scored >= 0.4 by automated quality pipeline
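A minimal sketch of the quality gate described above. The pair schema (`prompt`, `response`, `score`) is an assumption for illustration; only the 0.4 threshold comes from the card.

```python
# Minimal sketch of the quality gate described above. The pair schema
# is an assumption; only the 0.4 threshold comes from the model card.
QUALITY_THRESHOLD = 0.4

def quality_gate(pairs, threshold=QUALITY_THRESHOLD):
    """Keep only Q&A pairs whose automated quality score passes the gate."""
    return [p for p in pairs if p["score"] >= threshold]

candidates = [
    {"prompt": "What is the parent class of ACharacter?",
     "response": "APawn", "score": 0.92},
    {"prompt": "Explain UFakeSubsystem",  # negative-style pair (made-up API)
     "response": "I'm not aware of a UFakeSubsystem class in UE5.",
     "score": 0.55},
    {"prompt": "low-quality pair", "response": "...", "score": 0.31},
]
kept = quality_gate(candidates)  # the 0.31 pair is dropped
```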
## Usage with Ollama

```shell
# Download the GGUF and Modelfile, then:
ollama create ue-expert -f Modelfile
ollama run ue-expert "What is the parent class of ACharacter?"
```
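Beyond the CLI, the model can be queried programmatically through Ollama's local REST API. A stdlib-only sketch follows; the endpoint and JSON fields follow Ollama's documented `/api/generate` contract, and the model name assumes the `ollama create` step above has been run.

```python
import json
import urllib.request

def build_request(prompt, model="ue-expert"):
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ue_expert(prompt, host="http://localhost:11434"):
    """Send one prompt to the locally served model and return its reply."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server:
# ask_ue_expert("What is the parent class of ACharacter?")
```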
## Files

- `model-q4_k_m.gguf` — quantized model (Q4_K_M, 8.4 GB)
- `sft_adapter/` — LoRA adapter weights (for further fine-tuning)
- `Modelfile` — Ollama model definition with ChatML template
## Training Metrics
| Metric | Value |
|---|---|
| Steps | 1299 (3 epochs) |
| Final train loss | 0.668 |
| Final eval loss | 0.677 |
| Hardware | RunPod A100 SXM 80GB |
| Training time | ~3h 46m |
| VRAM usage | 15.8 GB / 80 GB |
## Part of game-dev-docs
This model is the synthesis layer for a RAG + fine-tuned model + MCP server pipeline that provides deep Unreal Engine awareness to Claude Code. The RAG pipeline provides retrieval over 302K indexed documentation chunks; this model provides internalized domain knowledge for synthesis and judgment calls.
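The division of labor above can be sketched as a prompt-assembly step: top-ranked RAG chunks go into the context window and the fine-tuned model synthesizes the answer. The chunk format and template wording here are illustrative assumptions, not the actual pipeline code.

```python
def build_synthesis_prompt(question, chunks, max_chunks=5):
    """Pack top-ranked RAG chunks into a grounding context for the model.

    `chunks` is assumed to be a relevance-ranked list of documentation
    excerpts; the template wording is illustrative, not the pipeline's.
    """
    context = "\n\n".join(
        f"[{i}] {chunk}" for i, chunk in enumerate(chunks[:max_chunks], start=1)
    )
    return (
        "Answer using the UE5 documentation excerpts below; if they do not "
        "cover the question, rely on internalized knowledge and say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_synthesis_prompt(
    "How do I bind input in an ACharacter subclass?",
    ["SetupPlayerInputComponent is the override where input is bound...",
     "UInputComponent supports action and axis bindings..."],
)
```

The `max_chunks` cap reflects the usual trade-off: more retrieved context improves grounding until it crowds out room for the model's answer.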
## Model tree for o-duffy/ue-expert-v2

- Base model: Qwen/Qwen2.5-14B
- Fine-tuned: Qwen/Qwen2.5-Coder-14B
- Fine-tuned: Qwen/Qwen2.5-Coder-14B-Instruct
- This model: o-duffy/ue-expert-v2