STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models
A LoRA adapter for STEP-LLM, fine-tuned from Llama-3.2-3B-Instruct to generate ISO 10303-21 (STEP) files from natural-language descriptions.
Paper: STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models (DATE 2026)
Load the base model and attach the adapter with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply the STEP-LLM LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base_model, "JasonShiii/step-llm-llama3b")
tokenizer = AutoTokenizer.from_pretrained("JasonShiii/step-llm-llama3b")
```
Or use the inference script from the GitHub repo:
```shell
python generate_step.py \
    --ckpt_path JasonShiii/step-llm-llama3b \
    --caption "A cylindrical bolt with a hexagonal head"
```
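The model's output is ISO 10303-21 (Part 21) text. Before handing a generated file to a CAD kernel, a cheap structural sanity check can catch truncated or malformed generations. The helper below is an illustrative sketch (not part of the STEP-LLM repository); it only verifies the outer Part 21 skeleton and does not parse entities:

```python
def looks_like_part21(text: str) -> bool:
    """Cheap structural check for an ISO 10303-21 (STEP) file.

    Verifies only the outer skeleton, in order: the ISO-10303-21
    banner, a HEADER section and a DATA section each closed by
    ENDSEC, and the END-ISO-10303-21 terminator.
    """
    tokens = [
        "ISO-10303-21;",
        "HEADER;",
        "ENDSEC;",
        "DATA;",
        "ENDSEC;",
        "END-ISO-10303-21;",
    ]
    pos = 0
    for token in tokens:
        idx = text.find(token, pos)
        if idx == -1:
            return False
        pos = idx + len(token)
    return True

sample = """ISO-10303-21;
HEADER;
FILE_DESCRIPTION((''), '2;1');
ENDSEC;
DATA;
#1=CARTESIAN_POINT('',(0.,0.,0.));
ENDSEC;
END-ISO-10303-21;
"""
print(looks_like_part21(sample))  # prints True
```

A real pipeline would follow this with a proper Part 21 parser or an import into a CAD kernel; this check only rejects obviously broken output.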
| Parameter | Value |
|---|---|
| Base model | Llama-3.2-3B-Instruct |
| LoRA rank (r) | 16 |
| lora_alpha | 16 |
| Learning rate | 5e-5 |
| Batch size | 2 per device (×4 gradient accumulation = effective 8) |
| max_seq_length | 16384 |
| Training data | ~20k STEP files (0–500 entities each) |
```bibtex
@article{shi2026step,
  title={STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models},
  author={Shi, Xiangyu and Ding, Junyang and Zhao, Xu and Zhan, Sinong and Mohapatra, Payal and Quispe, Daniel and Welbeck, Kojo and Cao, Jian and Chen, Wei and Guo, Ping and others},
  journal={arXiv preprint arXiv:2601.12641},
  year={2026}
}
```