---
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - en
tags:
  - CAD
  - STEP
  - text-to-CAD
  - ABC-dataset
  - rendering
pretty_name: STEP-LLM Dataset
size_categories:
  - 10K<n<100K
---

# STEP-LLM Dataset
Rendered multi-view images of ABC Dataset STEP files, used in our DATE 2026 paper:
"STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models"
## Dataset Contents
Rendered JPEG images of CAD models from the ABC Dataset (NYU), organized by entity count. These images were used with GPT-4o to generate natural language captions for training STEP-LLM.
| Folder | Entity range | # Models | Description |
|---|---|---|---|
| `step_under500_image/` | 0–500 entities | ~20k | Used in DATE 2026 paper |
| `step_500-1000_image/` | 500–1000 entities | ~17k | Ongoing journal extension |
Each folder contains 10 per-chunk zip files (one per ABC dataset chunk, `abc_0001` – `abc_0010`):

- `step_under500_image/abc_000N_step_v00_under500_image.zip` (~25 MB each)
- `step_500-1000_image/abc_000N_step_v00_500-1000.zip` (~75–85 MB each)
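The entity-range buckets above presumably refer to the number of `#N = ...` instance lines in each STEP file's DATA section. A minimal sketch of computing such a count, under that assumption (the function name and the sample snippet are illustrative, not part of the dataset tooling):

```python
import re

def count_step_entities(step_text: str) -> int:
    """Count STEP instance lines of the form '#N = ...' in the file text."""
    return len(re.findall(r"^#\d+\s*=", step_text, flags=re.MULTILINE))

# Tiny illustrative STEP fragment (not from the dataset):
sample = """#1 = CARTESIAN_POINT('',(0.,0.,0.));
#2 = DIRECTION('',(0.,0.,1.));
#3 = AXIS2_PLACEMENT_3D('',#1,#2,$);
"""
print(count_step_entities(sample))  # 3
```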
## Download
```python
from huggingface_hub import hf_hub_download

# Download a single chunk (under-500 entities, chunk 1)
hf_hub_download(
    repo_id="JasonShiii/STEP-LLM-dataset",
    filename="step_under500_image/abc_0001_step_v00_under500_image.zip",
    repo_type="dataset",
    local_dir="./data",
)
```
Or download all chunks:
```bash
huggingface-cli download JasonShiii/STEP-LLM-dataset --repo-type dataset --local-dir ./data
```
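Once a chunk zip is on disk, it can be unpacked with Python's standard `zipfile` module. A minimal sketch (the helper name and output paths are assumptions for illustration, not part of the dataset tooling):

```python
import zipfile
from pathlib import Path

def extract_chunk(zip_path: str, out_dir: str) -> list[Path]:
    """Extract one per-chunk zip and return the paths of its JPEG images."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out)
    return sorted(out.rglob("*.jpg"))

# Example (assumes chunk 1 was already downloaded to ./data as shown above):
# images = extract_chunk(
#     "data/step_under500_image/abc_0001_step_v00_under500_image.zip",
#     "data/extracted/abc_0001",
# )
# print(len(images), "rendered views")
```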
## Usage
These images are used in `data_preparation/captioning.ipynb` (see the GitHub repo) to generate GPT-4o captions, which serve as the natural language training inputs for STEP-LLM.

The released captions are available directly in the GitHub repo (`cad_captions_0-500.csv`, `cad_captions_500-1000.csv`); you do not need to download these images to train or run inference with STEP-LLM.
## License
Images are derived from the ABC Dataset and released under CC BY 4.0.
## Citation
```bibtex
@article{shi2026step,
  title={STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models},
  author={Shi, Xiangyu and Ding, Junyang and Zhao, Xu and Zhan, Sinong and Mohapatra, Payal
          and Quispe, Daniel and Welbeck, Kojo and Cao, Jian and Chen, Wei and Guo, Ping and others},
  journal={arXiv preprint arXiv:2601.12641},
  year={2026}
}
```