---
base_model: Qwen/Qwen3-4B
library_name: peft
tags:
- lora
- qwen3
- nurbs
- cad
- text-to-cad
- peft
- ms-swift
license: apache-2.0
datasets:
- SadilKhan/PartABC
language:
- en
pipeline_tag: text-to-3d
---

<style>
.nurbgen-text {
  font-family: 'Arial', sans-serif;
  font-weight: 800;
  font-size: 5rem;
  background: linear-gradient(
    90deg,
    #a78bfa, #60a5fa, #34d399,
    #fbbf24, #f87171, #c084fc,
    #60a5fa, #34d399
  );
  background-size: 300% 100%;
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
  background-clip: text;
}
</style>

<div align="center">

<span class="nurbgen-text">NURBGen</span>
<h1>High-Fidelity Text-to-CAD Generation through <br>
LLM-Driven NURBS Modeling</h1>

[Muhammad Usama*](https://scholar.google.com/citations?user=zcRPmUoAAAAJ&hl=en) · [Mohammad Sadil Khan*](https://scholar.google.com/citations?user=XIDQo_IAAAAJ&hl=en&authuser=1) · [Didier Stricker](https://scholar.google.com/citations?hl=en&authuser=1&user=ImhXfxgAAAAJ) · [Muhammad Zeshan Afzal](https://scholar.google.com/citations?user=kHMVj6oAAAAJ&hl=en&authuser=1&oi=ao)

_*equally contributing first authors_

|
| <div style="display: flex; justify-content: center; gap: 10px;"> |
| <a href="https://arxiv.org/abs/2511.06194"> |
| <img src="https://img.shields.io/badge/Arxiv-3498db?style=for-the-badge&logoWidth=40&logoColor=white" alt="Paper" /> |
| </a> |
| <a href="https://muhammadusama100.github.io/NURBGen-Project/"> |
| <img src="https://img.shields.io/badge/Project-2ecc71?style=for-the-badge&logoWidth=40&logoColor=white" alt="Project" /> |
| </a> |
| <a href="https://github.com/SadilKhan"> |
| <img src="https://img.shields.io/badge/Code-89AAE6?style=for-the-badge&logoWidth=40&logoColor=white" alt="Code" /> |
| </a> |
| </div> |
| |
| <img src="https://readme-typing-svg.herokuapp.com?font=JetBrains+Mono&size=36&pause=1000¢er=true&vCenter=true&width=1000&height=75&color=0C7C59&lines=AAAI+2026" /> |
| </div> |
|
|
---

## Model Details

| Property | Value |
|---|---|
| **Base model** | `Qwen/Qwen3-4B` |
| **Adapter type** | LoRA |
| **Fine-tuning framework** | [ms-swift](https://github.com/modelscope/ms-swift) |
| **Checkpoint step** | 180,000 |

---

## How to Use

### Requirements

```bash
pip install ms-swift transformers peft torch
```

### Single Prompt (ms-swift)

```python
from swift.llm import PtEngine, RequestConfig, InferRequest

# Load the base model and attach the NURBGen LoRA adapter.
engine = PtEngine(
    "Qwen/Qwen3-4B",
    adapters=["SadilKhan/NURBGen"],
    use_hf=True,
)

request_config = RequestConfig(max_tokens=8192, temperature=0.3)

response = engine.infer(
    [InferRequest(messages=[{"role": "user", "content": "Generate NURBS for the following: Design a small table with rounded edges and tapered legs. Include four dowel pins along one side for assembly. The table has chamfers at specific corners and fillets on its underside for smooth transitions. Dimensions: length 23.75 mm, width 70.00 mm, height 27.50 mm."}])],
    request_config=request_config,
)

print(response[0].choices[0].message.content)
```
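
Both inference examples on this card prepend the same instruction prefix to the part description. A tiny helper keeps prompts consistent; the prefix is copied verbatim from the sample prompts above, while the helper name itself is illustrative and not part of the released code:

```python
def make_nurbgen_prompt(description: str) -> str:
    """Build a NURBGen prompt from a free-form part description.

    The "Generate NURBS for the following:" prefix matches the sample
    prompts on this card; adjust it if your data uses another template.
    """
    return f"Generate NURBS for the following: {description}"

print(make_nurbgen_prompt("A smooth curved surface with 6 control points"))
```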

---

### Single Prompt (HuggingFace / PEFT)

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "Qwen/Qwen3-4B"
adapter_id = "SadilKhan/NURBGen"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "Generate NURBS for the following: Design a small table with rounded edges and tapered legs. Include four dowel pins along one side for assembly. The table has chamfers at specific corners and fillets on its underside for smooth transitions. Dimensions: length 23.75 mm, width 70.00 mm, height 27.50 mm."
messages = [{"role": "user", "content": prompt}]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=8192,
        do_sample=True,  # sampling must be enabled for temperature to apply
        temperature=0.3,
    )

# Decode only the newly generated tokens, skipping the prompt.
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
```

## Output Format

Each result is saved as `{uid}.json`; a printed summary of one result looks like:

```
UID : smooth_curved_surface
PROMPT : A smooth curved surface with 6 control points
------------------------------------------------------------
<generated NURBS representation>
```
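
A minimal sketch of writing one generation to the `{uid}.json` layout described above. The field names (`"uid"`, `"prompt"`, `"nurbs"`) are illustrative assumptions, not the official schema of the released tooling:

```python
import json
from pathlib import Path

def save_generation(uid: str, prompt: str, nurbs_text: str,
                    out_dir: str = "outputs") -> Path:
    """Write a single generation to {out_dir}/{uid}.json.

    The JSON keys here are hypothetical placeholders; check the project
    repository for the exact schema used by the evaluation scripts.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"{uid}.json"
    path.write_text(json.dumps(
        {"uid": uid, "prompt": prompt, "nurbs": nurbs_text}, indent=2))
    return path

saved = save_generation("smooth_curved_surface",
                        "A smooth curved surface with 6 control points",
                        "<generated NURBS representation>")
print(saved)
```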

---

## Citation

If you use NURBGen in your research, please cite:

```bibtex
@inproceedings{usama2026nurbgen,
  title={NURBGen: High-Fidelity Text-to-CAD Generation through LLM-Driven NURBS Modeling},
  author={Usama, Muhammad and Khan, Mohammad Sadil and Stricker, Didier and Afzal, Muhammad Zeshan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={40},
  number={12},
  pages={9603--9611},
  year={2026}
}
```