---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-to-3d
datasets:
- FreedomIntelligence/BlendNet
metrics:
- code_eval
tags:
- code
- render
- CAD
- 3D
- Modeling
- LLM
- bpy
- Blender
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# 🤖 BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement
**BlenderLLM** is built on **Qwen2.5-Coder-7B-Instruct** as the base model. It was fine-tuned on the **BlendNet** training dataset and further optimized with **Self-improvement** techniques to reach its best performance.
For more details, please visit our [GitHub repository](https://github.com/FreedomIntelligence/BlenderLLM) or refer to our [arXiv paper](https://www.arxiv.org/abs/2412.14203).
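BlenderLLM maps a natural-language modeling instruction to a `bpy` (Blender Python) script. A minimal sketch of querying it with 🤗 Transformers is below; the repo id `FreedomIntelligence/BlenderLLM`, the system prompt, and the decoding settings are assumptions based on this card, not the exact setup used in the paper.

```python
# Hedged sketch: prompting BlenderLLM for a bpy script via Hugging Face
# transformers. The repo id and system prompt are assumptions, not the
# verified training-time configuration.

# Qwen2.5 models use the ChatML turn format; in practice, prefer
# tokenizer.apply_chat_template, which applies the exact template
# shipped with the checkpoint.
QWEN_CHAT_TEMPLATE = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

def build_prompt(
    instruction: str,
    system: str = "You are a CAD assistant. Reply with a bpy script only.",
) -> str:
    """Wrap a modeling instruction in Qwen2.5's ChatML format."""
    return QWEN_CHAT_TEMPLATE.format(system=system, user=instruction)

def generate_bpy(
    instruction: str,
    model_id: str = "FreedomIntelligence/BlenderLLM",  # assumed repo id
) -> str:
    """Load the model lazily and return the generated bpy script."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=1024)
    # Strip the prompt tokens, keep only the newly generated script.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

A returned script can then be rendered headless with Blender's CLI, e.g. `blender --background --python scene.py`.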
## 📖 Citation
```bibtex
@misc{du2024blenderllmtraininglargelanguage,
      title={BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement},
      author={Yuhao Du and Shunian Chen and Wenbo Zan and Peizhao Li and Mingxuan Wang and Dingjie Song and Bo Li and Yan Hu and Benyou Wang},
      year={2024},
      eprint={2412.14203},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2412.14203},
}
```
We are from the School of Data Science (SDS), the Chinese University of Hong Kong, Shenzhen (CUHKSZ).