---
base_model: Qwen/Qwen3-4B
library_name: peft
tags:
- lora
- qwen3
- nurbs
- cad
- text-to-cad
- peft
- ms-swift
license: apache-2.0
datasets:
- SadilKhan/PartABC
language:
- en
pipeline_tag: text-to-3d
---
# NURBGen

**High-Fidelity Text-to-CAD Generation through LLM-Driven NURBS Modeling**

Muhammad Usama\* · Mohammad Sadil Khan\* · Didier Stricker · Muhammad Zeshan Afzal

\*Equally contributing first authors
## Model Details
| Property | Value |
|---|---|
| Base model | Qwen/Qwen3-4B |
| Adapter type | LoRA |
| Fine-tuning framework | ms-swift |
| Checkpoint step | 180,000 |
## How to Use

### Requirements

```bash
pip install ms-swift transformers peft torch
```
### Single Prompt (ms-swift)

```python
from swift.llm import PtEngine, RequestConfig, InferRequest

# Load the base model and attach the NURBGen LoRA adapter from the Hugging Face Hub.
engine = PtEngine(
    "Qwen/Qwen3-4B",
    adapters=["SadilKhan/NURBGen"],
    use_hf=True,
)

request_config = RequestConfig(max_tokens=8192, temperature=0.3)

response = engine.infer(
    [InferRequest(messages=[{"role": "user", "content": "Generate NURBS for the following: Design a small table with rounded edges and tapered legs. Include four dowel pins along one side for assembly. The table has chamfers at specific corners and fillets on its underside for smooth transitions. Dimensions: length 23.75 mm, width 70.00 mm, height 27.50 mm."}])],
    request_config=request_config,
)
print(response[0].choices[0].message.content)
```
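Because `engine.infer` takes a list of `InferRequest` objects, several prompts can be generated in a single batched call. A minimal sketch, reusing the `engine` and `request_config` from above (the two prompts are illustrative placeholders, not from the training data):

```python
prompts = [
    "Generate NURBS for the following: A cylindrical bracket with two mounting holes.",
    "Generate NURBS for the following: A rectangular plate with a central circular cutout.",
]

# One InferRequest per prompt; ms-swift handles the batch in a single call.
requests = [InferRequest(messages=[{"role": "user", "content": p}]) for p in prompts]
responses = engine.infer(requests, request_config=request_config)

for resp in responses:
    print(resp.choices[0].message.content)
```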
### Single Prompt (Hugging Face / PEFT)

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "Qwen/Qwen3-4B"
adapter_id = "SadilKhan/NURBGen"

# Load the base model in bfloat16 and attach the NURBGen LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

prompt = "Generate NURBS for the following: Design a small table with rounded edges and tapered legs. Include four dowel pins along one side for assembly. The table has chamfers at specific corners and fillets on its underside for smooth transitions. Dimensions: length 23.75 mm, width 70.00 mm, height 27.50 mm."
messages = [{"role": "user", "content": prompt}]

# Format the conversation with the Qwen3 chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=8192,
        do_sample=True,  # sampling must be enabled for temperature to take effect
        temperature=0.3,
    )

# Decode only the newly generated tokens, skipping the prompt.
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
```
## Output Format

Each result is saved as `{uid}.json`:

```text
UID    : smooth_curved_surface
PROMPT : A smooth curved surface with 6 control points
------------------------------------------------------------
<generated NURBS representation>
```
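A minimal sketch of how a single generation could be written to `{uid}.json`; the `save_result` helper and the JSON field names are illustrative assumptions, not the project's actual serialization code:

```python
import json

def save_result(uid: str, prompt: str, nurbs_text: str) -> None:
    # Hypothetical helper: bundle one generation into {uid}.json.
    record = {"uid": uid, "prompt": prompt, "nurbs": nurbs_text}
    with open(f"{uid}.json", "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

# `result` is the model output from the PEFT example above.
save_result(
    "smooth_curved_surface",
    "A smooth curved surface with 6 control points",
    result,
)
```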
## Citation

If you use NURBGen in your research, please cite:

```bibtex
@inproceedings{usama2026nurbgen,
  title={NURBGen: High-Fidelity Text-to-CAD Generation through LLM-Driven NURBS Modeling},
  author={Usama, Muhammad and Khan, Mohammad Sadil and Stricker, Didier and Afzal, Muhammad Zeshan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={40},
  number={12},
  pages={9603--9611},
  year={2026}
}
```